Test Report: KVM_Linux_crio 20383

6d8453a169b79d4a3a523103ba23ea73f71b9b0b:2025-02-10:38292

Test fail (10/327)

TestAddons/parallel/Ingress (156.29s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-234038 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-234038 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-234038 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [b817e0be-815e-46cb-8d43-a875a079b5d0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [b817e0be-815e-46cb-8d43-a875a079b5d0] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.003348338s
I0210 12:08:35.131754  632352 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-234038 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-234038 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.596461572s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-234038 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-234038 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.247
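Note: the "ssh: Process exited with status 28" in the stderr block above is the exit code of the curl command run inside the VM, and curl uses 28 for an operation timeout, so the request reached the node but the ingress controller never answered within curl's default window. A minimal sketch of how this check could be reproduced by hand against the same profile (a hypothetical follow-up, assuming the addons-234038 profile is still running; the deployment name ingress-nginx-controller is the standard one for the addon and is not taken from this log):

  # confirm the controller pod is Ready and see which node it runs on
  kubectl --context addons-234038 -n ingress-nginx get pods -o wide

  # tail the controller logs around the time of the failed request
  kubectl --context addons-234038 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50

  # re-run the probe with verbose output and an explicit timeout instead of waiting out the ssh call
  out/minikube-linux-amd64 -p addons-234038 ssh "curl -sv --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"
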
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-234038 -n addons-234038
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-234038 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-234038 logs -n 25: (1.178902729s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-131804                                                                     | download-only-131804 | jenkins | v1.35.0 | 10 Feb 25 12:05 UTC | 10 Feb 25 12:05 UTC |
	| delete  | -p download-only-152629                                                                     | download-only-152629 | jenkins | v1.35.0 | 10 Feb 25 12:05 UTC | 10 Feb 25 12:05 UTC |
	| delete  | -p download-only-131804                                                                     | download-only-131804 | jenkins | v1.35.0 | 10 Feb 25 12:05 UTC | 10 Feb 25 12:05 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-795954 | jenkins | v1.35.0 | 10 Feb 25 12:05 UTC |                     |
	|         | binary-mirror-795954                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:41035                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-795954                                                                     | binary-mirror-795954 | jenkins | v1.35.0 | 10 Feb 25 12:05 UTC | 10 Feb 25 12:05 UTC |
	| addons  | enable dashboard -p                                                                         | addons-234038        | jenkins | v1.35.0 | 10 Feb 25 12:05 UTC |                     |
	|         | addons-234038                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-234038        | jenkins | v1.35.0 | 10 Feb 25 12:05 UTC |                     |
	|         | addons-234038                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-234038 --wait=true                                                                | addons-234038        | jenkins | v1.35.0 | 10 Feb 25 12:05 UTC | 10 Feb 25 12:07 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-234038 addons disable                                                                | addons-234038        | jenkins | v1.35.0 | 10 Feb 25 12:07 UTC | 10 Feb 25 12:07 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-234038 addons disable                                                                | addons-234038        | jenkins | v1.35.0 | 10 Feb 25 12:07 UTC | 10 Feb 25 12:07 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-234038        | jenkins | v1.35.0 | 10 Feb 25 12:07 UTC | 10 Feb 25 12:07 UTC |
	|         | -p addons-234038                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-234038 addons                                                                        | addons-234038        | jenkins | v1.35.0 | 10 Feb 25 12:08 UTC | 10 Feb 25 12:08 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-234038 addons disable                                                                | addons-234038        | jenkins | v1.35.0 | 10 Feb 25 12:08 UTC | 10 Feb 25 12:08 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-234038 addons disable                                                                | addons-234038        | jenkins | v1.35.0 | 10 Feb 25 12:08 UTC | 10 Feb 25 12:08 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-234038 ip                                                                            | addons-234038        | jenkins | v1.35.0 | 10 Feb 25 12:08 UTC | 10 Feb 25 12:08 UTC |
	| addons  | addons-234038 addons disable                                                                | addons-234038        | jenkins | v1.35.0 | 10 Feb 25 12:08 UTC | 10 Feb 25 12:08 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-234038 addons                                                                        | addons-234038        | jenkins | v1.35.0 | 10 Feb 25 12:08 UTC | 10 Feb 25 12:08 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-234038 addons                                                                        | addons-234038        | jenkins | v1.35.0 | 10 Feb 25 12:08 UTC | 10 Feb 25 12:08 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-234038 addons                                                                        | addons-234038        | jenkins | v1.35.0 | 10 Feb 25 12:08 UTC | 10 Feb 25 12:08 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-234038 ssh cat                                                                       | addons-234038        | jenkins | v1.35.0 | 10 Feb 25 12:08 UTC | 10 Feb 25 12:08 UTC |
	|         | /opt/local-path-provisioner/pvc-5a8361a7-be5f-41c0-89c9-8e967fbf6923_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-234038 addons disable                                                                | addons-234038        | jenkins | v1.35.0 | 10 Feb 25 12:08 UTC | 10 Feb 25 12:09 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-234038 ssh curl -s                                                                   | addons-234038        | jenkins | v1.35.0 | 10 Feb 25 12:08 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-234038 addons                                                                        | addons-234038        | jenkins | v1.35.0 | 10 Feb 25 12:08 UTC | 10 Feb 25 12:08 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-234038 addons                                                                        | addons-234038        | jenkins | v1.35.0 | 10 Feb 25 12:08 UTC | 10 Feb 25 12:08 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-234038 ip                                                                            | addons-234038        | jenkins | v1.35.0 | 10 Feb 25 12:10 UTC | 10 Feb 25 12:10 UTC |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/10 12:05:25
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0210 12:05:25.722790  632952 out.go:345] Setting OutFile to fd 1 ...
	I0210 12:05:25.723069  632952 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 12:05:25.723081  632952 out.go:358] Setting ErrFile to fd 2...
	I0210 12:05:25.723086  632952 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 12:05:25.723270  632952 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20383-625153/.minikube/bin
	I0210 12:05:25.723933  632952 out.go:352] Setting JSON to false
	I0210 12:05:25.724983  632952 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":13676,"bootTime":1739175450,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0210 12:05:25.725049  632952 start.go:139] virtualization: kvm guest
	I0210 12:05:25.727332  632952 out.go:177] * [addons-234038] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0210 12:05:25.729152  632952 notify.go:220] Checking for updates...
	I0210 12:05:25.729219  632952 out.go:177]   - MINIKUBE_LOCATION=20383
	I0210 12:05:25.730814  632952 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 12:05:25.732446  632952 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20383-625153/kubeconfig
	I0210 12:05:25.733846  632952 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20383-625153/.minikube
	I0210 12:05:25.735237  632952 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0210 12:05:25.736531  632952 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0210 12:05:25.737908  632952 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 12:05:25.770800  632952 out.go:177] * Using the kvm2 driver based on user configuration
	I0210 12:05:25.772447  632952 start.go:297] selected driver: kvm2
	I0210 12:05:25.772465  632952 start.go:901] validating driver "kvm2" against <nil>
	I0210 12:05:25.772478  632952 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0210 12:05:25.773201  632952 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 12:05:25.773286  632952 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20383-625153/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0210 12:05:25.789966  632952 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0210 12:05:25.790032  632952 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0210 12:05:25.790400  632952 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0210 12:05:25.790447  632952 cni.go:84] Creating CNI manager for ""
	I0210 12:05:25.790514  632952 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 12:05:25.790526  632952 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0210 12:05:25.790589  632952 start.go:340] cluster config:
	{Name:addons-234038 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-234038 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPl
ugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPause
Interval:1m0s}
	I0210 12:05:25.790737  632952 iso.go:125] acquiring lock: {Name:mk013d189757e85c0699014f5ef29205d9d4927f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 12:05:25.792810  632952 out.go:177] * Starting "addons-234038" primary control-plane node in "addons-234038" cluster
	I0210 12:05:25.794510  632952 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0210 12:05:25.794593  632952 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20383-625153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0210 12:05:25.794607  632952 cache.go:56] Caching tarball of preloaded images
	I0210 12:05:25.794698  632952 preload.go:172] Found /home/jenkins/minikube-integration/20383-625153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0210 12:05:25.794711  632952 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0210 12:05:25.795112  632952 profile.go:143] Saving config to /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/config.json ...
	I0210 12:05:25.795152  632952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/config.json: {Name:mk85bb483e022000a5b30f867fca956e8f7aedba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:05:25.795363  632952 start.go:360] acquireMachinesLock for addons-234038: {Name:mk28e87da66de739a4c7c70d1fb5afc4ce31a4d0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0210 12:05:25.795417  632952 start.go:364] duration metric: took 38.231µs to acquireMachinesLock for "addons-234038"
	I0210 12:05:25.795439  632952 start.go:93] Provisioning new machine with config: &{Name:addons-234038 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-234038 Namespa
ce:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptim
izations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0210 12:05:25.795508  632952 start.go:125] createHost starting for "" (driver="kvm2")
	I0210 12:05:25.797432  632952 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0210 12:05:25.797596  632952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:05:25.797649  632952 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:05:25.813074  632952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43085
	I0210 12:05:25.813663  632952 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:05:25.814344  632952 main.go:141] libmachine: Using API Version  1
	I0210 12:05:25.814368  632952 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:05:25.814753  632952 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:05:25.814919  632952 main.go:141] libmachine: (addons-234038) Calling .GetMachineName
	I0210 12:05:25.815094  632952 main.go:141] libmachine: (addons-234038) Calling .DriverName
	I0210 12:05:25.815263  632952 start.go:159] libmachine.API.Create for "addons-234038" (driver="kvm2")
	I0210 12:05:25.815298  632952 client.go:168] LocalClient.Create starting
	I0210 12:05:25.815348  632952 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca.pem
	I0210 12:05:25.974059  632952 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/cert.pem
	I0210 12:05:26.310903  632952 main.go:141] libmachine: Running pre-create checks...
	I0210 12:05:26.310930  632952 main.go:141] libmachine: (addons-234038) Calling .PreCreateCheck
	I0210 12:05:26.311437  632952 main.go:141] libmachine: (addons-234038) Calling .GetConfigRaw
	I0210 12:05:26.311919  632952 main.go:141] libmachine: Creating machine...
	I0210 12:05:26.311936  632952 main.go:141] libmachine: (addons-234038) Calling .Create
	I0210 12:05:26.312109  632952 main.go:141] libmachine: (addons-234038) creating KVM machine...
	I0210 12:05:26.312131  632952 main.go:141] libmachine: (addons-234038) creating network...
	I0210 12:05:26.313476  632952 main.go:141] libmachine: (addons-234038) DBG | found existing default KVM network
	I0210 12:05:26.314141  632952 main.go:141] libmachine: (addons-234038) DBG | I0210 12:05:26.313991  632976 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001151f0}
	I0210 12:05:26.314185  632952 main.go:141] libmachine: (addons-234038) DBG | created network xml: 
	I0210 12:05:26.314203  632952 main.go:141] libmachine: (addons-234038) DBG | <network>
	I0210 12:05:26.314214  632952 main.go:141] libmachine: (addons-234038) DBG |   <name>mk-addons-234038</name>
	I0210 12:05:26.314221  632952 main.go:141] libmachine: (addons-234038) DBG |   <dns enable='no'/>
	I0210 12:05:26.314230  632952 main.go:141] libmachine: (addons-234038) DBG |   
	I0210 12:05:26.314239  632952 main.go:141] libmachine: (addons-234038) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0210 12:05:26.314248  632952 main.go:141] libmachine: (addons-234038) DBG |     <dhcp>
	I0210 12:05:26.314256  632952 main.go:141] libmachine: (addons-234038) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0210 12:05:26.314283  632952 main.go:141] libmachine: (addons-234038) DBG |     </dhcp>
	I0210 12:05:26.314309  632952 main.go:141] libmachine: (addons-234038) DBG |   </ip>
	I0210 12:05:26.314319  632952 main.go:141] libmachine: (addons-234038) DBG |   
	I0210 12:05:26.314329  632952 main.go:141] libmachine: (addons-234038) DBG | </network>
	I0210 12:05:26.314342  632952 main.go:141] libmachine: (addons-234038) DBG | 
	I0210 12:05:26.319839  632952 main.go:141] libmachine: (addons-234038) DBG | trying to create private KVM network mk-addons-234038 192.168.39.0/24...
	I0210 12:05:26.387740  632952 main.go:141] libmachine: (addons-234038) DBG | private KVM network mk-addons-234038 192.168.39.0/24 created
	I0210 12:05:26.387775  632952 main.go:141] libmachine: (addons-234038) setting up store path in /home/jenkins/minikube-integration/20383-625153/.minikube/machines/addons-234038 ...
	I0210 12:05:26.387791  632952 main.go:141] libmachine: (addons-234038) DBG | I0210 12:05:26.387730  632976 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20383-625153/.minikube
	I0210 12:05:26.387809  632952 main.go:141] libmachine: (addons-234038) building disk image from file:///home/jenkins/minikube-integration/20383-625153/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0210 12:05:26.387889  632952 main.go:141] libmachine: (addons-234038) Downloading /home/jenkins/minikube-integration/20383-625153/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20383-625153/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0210 12:05:26.678687  632952 main.go:141] libmachine: (addons-234038) DBG | I0210 12:05:26.678509  632976 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20383-625153/.minikube/machines/addons-234038/id_rsa...
	I0210 12:05:26.818764  632952 main.go:141] libmachine: (addons-234038) DBG | I0210 12:05:26.818604  632976 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20383-625153/.minikube/machines/addons-234038/addons-234038.rawdisk...
	I0210 12:05:26.818801  632952 main.go:141] libmachine: (addons-234038) DBG | Writing magic tar header
	I0210 12:05:26.818816  632952 main.go:141] libmachine: (addons-234038) DBG | Writing SSH key tar header
	I0210 12:05:26.818828  632952 main.go:141] libmachine: (addons-234038) DBG | I0210 12:05:26.818727  632976 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20383-625153/.minikube/machines/addons-234038 ...
	I0210 12:05:26.819215  632952 main.go:141] libmachine: (addons-234038) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20383-625153/.minikube/machines/addons-234038
	I0210 12:05:26.819384  632952 main.go:141] libmachine: (addons-234038) setting executable bit set on /home/jenkins/minikube-integration/20383-625153/.minikube/machines/addons-234038 (perms=drwx------)
	I0210 12:05:26.819415  632952 main.go:141] libmachine: (addons-234038) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20383-625153/.minikube/machines
	I0210 12:05:26.819435  632952 main.go:141] libmachine: (addons-234038) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20383-625153/.minikube
	I0210 12:05:26.819451  632952 main.go:141] libmachine: (addons-234038) setting executable bit set on /home/jenkins/minikube-integration/20383-625153/.minikube/machines (perms=drwxr-xr-x)
	I0210 12:05:26.819461  632952 main.go:141] libmachine: (addons-234038) setting executable bit set on /home/jenkins/minikube-integration/20383-625153/.minikube (perms=drwxr-xr-x)
	I0210 12:05:26.819483  632952 main.go:141] libmachine: (addons-234038) setting executable bit set on /home/jenkins/minikube-integration/20383-625153 (perms=drwxrwxr-x)
	I0210 12:05:26.819496  632952 main.go:141] libmachine: (addons-234038) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0210 12:05:26.819510  632952 main.go:141] libmachine: (addons-234038) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20383-625153
	I0210 12:05:26.819525  632952 main.go:141] libmachine: (addons-234038) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0210 12:05:26.819564  632952 main.go:141] libmachine: (addons-234038) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0210 12:05:26.819584  632952 main.go:141] libmachine: (addons-234038) DBG | checking permissions on dir: /home/jenkins
	I0210 12:05:26.820066  632952 main.go:141] libmachine: (addons-234038) creating domain...
	I0210 12:05:26.820083  632952 main.go:141] libmachine: (addons-234038) DBG | checking permissions on dir: /home
	I0210 12:05:26.820095  632952 main.go:141] libmachine: (addons-234038) DBG | skipping /home - not owner
	I0210 12:05:26.821406  632952 main.go:141] libmachine: (addons-234038) define libvirt domain using xml: 
	I0210 12:05:26.821435  632952 main.go:141] libmachine: (addons-234038) <domain type='kvm'>
	I0210 12:05:26.821443  632952 main.go:141] libmachine: (addons-234038)   <name>addons-234038</name>
	I0210 12:05:26.821452  632952 main.go:141] libmachine: (addons-234038)   <memory unit='MiB'>4000</memory>
	I0210 12:05:26.821461  632952 main.go:141] libmachine: (addons-234038)   <vcpu>2</vcpu>
	I0210 12:05:26.821468  632952 main.go:141] libmachine: (addons-234038)   <features>
	I0210 12:05:26.821479  632952 main.go:141] libmachine: (addons-234038)     <acpi/>
	I0210 12:05:26.821488  632952 main.go:141] libmachine: (addons-234038)     <apic/>
	I0210 12:05:26.821511  632952 main.go:141] libmachine: (addons-234038)     <pae/>
	I0210 12:05:26.821520  632952 main.go:141] libmachine: (addons-234038)     
	I0210 12:05:26.821549  632952 main.go:141] libmachine: (addons-234038)   </features>
	I0210 12:05:26.821574  632952 main.go:141] libmachine: (addons-234038)   <cpu mode='host-passthrough'>
	I0210 12:05:26.821589  632952 main.go:141] libmachine: (addons-234038)   
	I0210 12:05:26.821604  632952 main.go:141] libmachine: (addons-234038)   </cpu>
	I0210 12:05:26.821615  632952 main.go:141] libmachine: (addons-234038)   <os>
	I0210 12:05:26.821623  632952 main.go:141] libmachine: (addons-234038)     <type>hvm</type>
	I0210 12:05:26.821634  632952 main.go:141] libmachine: (addons-234038)     <boot dev='cdrom'/>
	I0210 12:05:26.821644  632952 main.go:141] libmachine: (addons-234038)     <boot dev='hd'/>
	I0210 12:05:26.821660  632952 main.go:141] libmachine: (addons-234038)     <bootmenu enable='no'/>
	I0210 12:05:26.821670  632952 main.go:141] libmachine: (addons-234038)   </os>
	I0210 12:05:26.821677  632952 main.go:141] libmachine: (addons-234038)   <devices>
	I0210 12:05:26.821692  632952 main.go:141] libmachine: (addons-234038)     <disk type='file' device='cdrom'>
	I0210 12:05:26.821707  632952 main.go:141] libmachine: (addons-234038)       <source file='/home/jenkins/minikube-integration/20383-625153/.minikube/machines/addons-234038/boot2docker.iso'/>
	I0210 12:05:26.821718  632952 main.go:141] libmachine: (addons-234038)       <target dev='hdc' bus='scsi'/>
	I0210 12:05:26.821730  632952 main.go:141] libmachine: (addons-234038)       <readonly/>
	I0210 12:05:26.821740  632952 main.go:141] libmachine: (addons-234038)     </disk>
	I0210 12:05:26.821750  632952 main.go:141] libmachine: (addons-234038)     <disk type='file' device='disk'>
	I0210 12:05:26.821766  632952 main.go:141] libmachine: (addons-234038)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0210 12:05:26.821802  632952 main.go:141] libmachine: (addons-234038)       <source file='/home/jenkins/minikube-integration/20383-625153/.minikube/machines/addons-234038/addons-234038.rawdisk'/>
	I0210 12:05:26.821828  632952 main.go:141] libmachine: (addons-234038)       <target dev='hda' bus='virtio'/>
	I0210 12:05:26.821842  632952 main.go:141] libmachine: (addons-234038)     </disk>
	I0210 12:05:26.821856  632952 main.go:141] libmachine: (addons-234038)     <interface type='network'>
	I0210 12:05:26.821875  632952 main.go:141] libmachine: (addons-234038)       <source network='mk-addons-234038'/>
	I0210 12:05:26.821888  632952 main.go:141] libmachine: (addons-234038)       <model type='virtio'/>
	I0210 12:05:26.821900  632952 main.go:141] libmachine: (addons-234038)     </interface>
	I0210 12:05:26.821910  632952 main.go:141] libmachine: (addons-234038)     <interface type='network'>
	I0210 12:05:26.821917  632952 main.go:141] libmachine: (addons-234038)       <source network='default'/>
	I0210 12:05:26.821927  632952 main.go:141] libmachine: (addons-234038)       <model type='virtio'/>
	I0210 12:05:26.821939  632952 main.go:141] libmachine: (addons-234038)     </interface>
	I0210 12:05:26.821950  632952 main.go:141] libmachine: (addons-234038)     <serial type='pty'>
	I0210 12:05:26.821961  632952 main.go:141] libmachine: (addons-234038)       <target port='0'/>
	I0210 12:05:26.821972  632952 main.go:141] libmachine: (addons-234038)     </serial>
	I0210 12:05:26.821994  632952 main.go:141] libmachine: (addons-234038)     <console type='pty'>
	I0210 12:05:26.822012  632952 main.go:141] libmachine: (addons-234038)       <target type='serial' port='0'/>
	I0210 12:05:26.822024  632952 main.go:141] libmachine: (addons-234038)     </console>
	I0210 12:05:26.822031  632952 main.go:141] libmachine: (addons-234038)     <rng model='virtio'>
	I0210 12:05:26.822044  632952 main.go:141] libmachine: (addons-234038)       <backend model='random'>/dev/random</backend>
	I0210 12:05:26.822054  632952 main.go:141] libmachine: (addons-234038)     </rng>
	I0210 12:05:26.822066  632952 main.go:141] libmachine: (addons-234038)     
	I0210 12:05:26.822074  632952 main.go:141] libmachine: (addons-234038)     
	I0210 12:05:26.822090  632952 main.go:141] libmachine: (addons-234038)   </devices>
	I0210 12:05:26.822107  632952 main.go:141] libmachine: (addons-234038) </domain>
	I0210 12:05:26.822121  632952 main.go:141] libmachine: (addons-234038) 
	I0210 12:05:26.826417  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined MAC address 52:54:00:f5:24:8b in network default
	I0210 12:05:26.827052  632952 main.go:141] libmachine: (addons-234038) starting domain...
	I0210 12:05:26.827074  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:05:26.827081  632952 main.go:141] libmachine: (addons-234038) ensuring networks are active...
	I0210 12:05:26.827690  632952 main.go:141] libmachine: (addons-234038) Ensuring network default is active
	I0210 12:05:26.827977  632952 main.go:141] libmachine: (addons-234038) Ensuring network mk-addons-234038 is active
	I0210 12:05:26.828429  632952 main.go:141] libmachine: (addons-234038) getting domain XML...
	I0210 12:05:26.829157  632952 main.go:141] libmachine: (addons-234038) creating domain...
	I0210 12:05:28.041677  632952 main.go:141] libmachine: (addons-234038) waiting for IP...
	I0210 12:05:28.042501  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:05:28.042919  632952 main.go:141] libmachine: (addons-234038) DBG | unable to find current IP address of domain addons-234038 in network mk-addons-234038
	I0210 12:05:28.042988  632952 main.go:141] libmachine: (addons-234038) DBG | I0210 12:05:28.042917  632976 retry.go:31] will retry after 211.397909ms: waiting for domain to come up
	I0210 12:05:28.256458  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:05:28.256965  632952 main.go:141] libmachine: (addons-234038) DBG | unable to find current IP address of domain addons-234038 in network mk-addons-234038
	I0210 12:05:28.257005  632952 main.go:141] libmachine: (addons-234038) DBG | I0210 12:05:28.256925  632976 retry.go:31] will retry after 330.458395ms: waiting for domain to come up
	I0210 12:05:28.589580  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:05:28.590101  632952 main.go:141] libmachine: (addons-234038) DBG | unable to find current IP address of domain addons-234038 in network mk-addons-234038
	I0210 12:05:28.590130  632952 main.go:141] libmachine: (addons-234038) DBG | I0210 12:05:28.590080  632976 retry.go:31] will retry after 358.245765ms: waiting for domain to come up
	I0210 12:05:28.949795  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:05:28.950334  632952 main.go:141] libmachine: (addons-234038) DBG | unable to find current IP address of domain addons-234038 in network mk-addons-234038
	I0210 12:05:28.950363  632952 main.go:141] libmachine: (addons-234038) DBG | I0210 12:05:28.950316  632976 retry.go:31] will retry after 516.198073ms: waiting for domain to come up
	I0210 12:05:29.468047  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:05:29.468529  632952 main.go:141] libmachine: (addons-234038) DBG | unable to find current IP address of domain addons-234038 in network mk-addons-234038
	I0210 12:05:29.468558  632952 main.go:141] libmachine: (addons-234038) DBG | I0210 12:05:29.468485  632976 retry.go:31] will retry after 714.630258ms: waiting for domain to come up
	I0210 12:05:30.184502  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:05:30.184986  632952 main.go:141] libmachine: (addons-234038) DBG | unable to find current IP address of domain addons-234038 in network mk-addons-234038
	I0210 12:05:30.185026  632952 main.go:141] libmachine: (addons-234038) DBG | I0210 12:05:30.184975  632976 retry.go:31] will retry after 742.371249ms: waiting for domain to come up
	I0210 12:05:30.928544  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:05:30.929032  632952 main.go:141] libmachine: (addons-234038) DBG | unable to find current IP address of domain addons-234038 in network mk-addons-234038
	I0210 12:05:30.929063  632952 main.go:141] libmachine: (addons-234038) DBG | I0210 12:05:30.929002  632976 retry.go:31] will retry after 736.513038ms: waiting for domain to come up
	I0210 12:05:31.667220  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:05:31.667621  632952 main.go:141] libmachine: (addons-234038) DBG | unable to find current IP address of domain addons-234038 in network mk-addons-234038
	I0210 12:05:31.667662  632952 main.go:141] libmachine: (addons-234038) DBG | I0210 12:05:31.667580  632976 retry.go:31] will retry after 923.894877ms: waiting for domain to come up
	I0210 12:05:32.592789  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:05:32.593253  632952 main.go:141] libmachine: (addons-234038) DBG | unable to find current IP address of domain addons-234038 in network mk-addons-234038
	I0210 12:05:32.593284  632952 main.go:141] libmachine: (addons-234038) DBG | I0210 12:05:32.593206  632976 retry.go:31] will retry after 1.305765103s: waiting for domain to come up
	I0210 12:05:33.900857  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:05:33.901321  632952 main.go:141] libmachine: (addons-234038) DBG | unable to find current IP address of domain addons-234038 in network mk-addons-234038
	I0210 12:05:33.901344  632952 main.go:141] libmachine: (addons-234038) DBG | I0210 12:05:33.901287  632976 retry.go:31] will retry after 1.630461331s: waiting for domain to come up
	I0210 12:05:35.533864  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:05:35.534362  632952 main.go:141] libmachine: (addons-234038) DBG | unable to find current IP address of domain addons-234038 in network mk-addons-234038
	I0210 12:05:35.534427  632952 main.go:141] libmachine: (addons-234038) DBG | I0210 12:05:35.534355  632976 retry.go:31] will retry after 2.563132736s: waiting for domain to come up
	I0210 12:05:38.100885  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:05:38.101313  632952 main.go:141] libmachine: (addons-234038) DBG | unable to find current IP address of domain addons-234038 in network mk-addons-234038
	I0210 12:05:38.101342  632952 main.go:141] libmachine: (addons-234038) DBG | I0210 12:05:38.101273  632976 retry.go:31] will retry after 2.649105353s: waiting for domain to come up
	I0210 12:05:40.751735  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:05:40.752156  632952 main.go:141] libmachine: (addons-234038) DBG | unable to find current IP address of domain addons-234038 in network mk-addons-234038
	I0210 12:05:40.752190  632952 main.go:141] libmachine: (addons-234038) DBG | I0210 12:05:40.752102  632976 retry.go:31] will retry after 4.125892958s: waiting for domain to come up
	I0210 12:05:44.882221  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:05:44.882554  632952 main.go:141] libmachine: (addons-234038) DBG | unable to find current IP address of domain addons-234038 in network mk-addons-234038
	I0210 12:05:44.882579  632952 main.go:141] libmachine: (addons-234038) DBG | I0210 12:05:44.882543  632976 retry.go:31] will retry after 5.405398513s: waiting for domain to come up
	I0210 12:05:50.289289  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:05:50.289760  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has current primary IP address 192.168.39.247 and MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:05:50.289782  632952 main.go:141] libmachine: (addons-234038) found domain IP: 192.168.39.247
	I0210 12:05:50.289790  632952 main.go:141] libmachine: (addons-234038) reserving static IP address...
	I0210 12:05:50.290268  632952 main.go:141] libmachine: (addons-234038) DBG | unable to find host DHCP lease matching {name: "addons-234038", mac: "52:54:00:1f:e4:b4", ip: "192.168.39.247"} in network mk-addons-234038
	I0210 12:05:50.368769  632952 main.go:141] libmachine: (addons-234038) reserved static IP address 192.168.39.247 for domain addons-234038
	I0210 12:05:50.368804  632952 main.go:141] libmachine: (addons-234038) DBG | Getting to WaitForSSH function...
	I0210 12:05:50.368812  632952 main.go:141] libmachine: (addons-234038) waiting for SSH...
	I0210 12:05:50.371877  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:05:50.372276  632952 main.go:141] libmachine: (addons-234038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:e4:b4", ip: ""} in network mk-addons-234038: {Iface:virbr1 ExpiryTime:2025-02-10 13:05:40 +0000 UTC Type:0 Mac:52:54:00:1f:e4:b4 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:minikube Clientid:01:52:54:00:1f:e4:b4}
	I0210 12:05:50.372308  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined IP address 192.168.39.247 and MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:05:50.372444  632952 main.go:141] libmachine: (addons-234038) DBG | Using SSH client type: external
	I0210 12:05:50.372469  632952 main.go:141] libmachine: (addons-234038) DBG | Using SSH private key: /home/jenkins/minikube-integration/20383-625153/.minikube/machines/addons-234038/id_rsa (-rw-------)
	I0210 12:05:50.372515  632952 main.go:141] libmachine: (addons-234038) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.247 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20383-625153/.minikube/machines/addons-234038/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0210 12:05:50.372530  632952 main.go:141] libmachine: (addons-234038) DBG | About to run SSH command:
	I0210 12:05:50.372562  632952 main.go:141] libmachine: (addons-234038) DBG | exit 0
	I0210 12:05:50.497027  632952 main.go:141] libmachine: (addons-234038) DBG | SSH cmd err, output: <nil>: 
	I0210 12:05:50.497346  632952 main.go:141] libmachine: (addons-234038) KVM machine creation complete
	I0210 12:05:50.497731  632952 main.go:141] libmachine: (addons-234038) Calling .GetConfigRaw
	I0210 12:05:50.498362  632952 main.go:141] libmachine: (addons-234038) Calling .DriverName
	I0210 12:05:50.498622  632952 main.go:141] libmachine: (addons-234038) Calling .DriverName
	I0210 12:05:50.498814  632952 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0210 12:05:50.498829  632952 main.go:141] libmachine: (addons-234038) Calling .GetState
	I0210 12:05:50.500147  632952 main.go:141] libmachine: Detecting operating system of created instance...
	I0210 12:05:50.500165  632952 main.go:141] libmachine: Waiting for SSH to be available...
	I0210 12:05:50.500173  632952 main.go:141] libmachine: Getting to WaitForSSH function...
	I0210 12:05:50.500181  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHHostname
	I0210 12:05:50.502381  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:05:50.502745  632952 main.go:141] libmachine: (addons-234038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:e4:b4", ip: ""} in network mk-addons-234038: {Iface:virbr1 ExpiryTime:2025-02-10 13:05:40 +0000 UTC Type:0 Mac:52:54:00:1f:e4:b4 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-234038 Clientid:01:52:54:00:1f:e4:b4}
	I0210 12:05:50.502775  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined IP address 192.168.39.247 and MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:05:50.502904  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHPort
	I0210 12:05:50.503079  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHKeyPath
	I0210 12:05:50.503249  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHKeyPath
	I0210 12:05:50.503406  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHUsername
	I0210 12:05:50.503589  632952 main.go:141] libmachine: Using SSH client type: native
	I0210 12:05:50.503837  632952 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0210 12:05:50.503849  632952 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0210 12:05:50.608004  632952 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0210 12:05:50.608031  632952 main.go:141] libmachine: Detecting the provisioner...
	I0210 12:05:50.608038  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHHostname
	I0210 12:05:50.610796  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:05:50.611107  632952 main.go:141] libmachine: (addons-234038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:e4:b4", ip: ""} in network mk-addons-234038: {Iface:virbr1 ExpiryTime:2025-02-10 13:05:40 +0000 UTC Type:0 Mac:52:54:00:1f:e4:b4 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-234038 Clientid:01:52:54:00:1f:e4:b4}
	I0210 12:05:50.611131  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined IP address 192.168.39.247 and MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:05:50.611375  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHPort
	I0210 12:05:50.611572  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHKeyPath
	I0210 12:05:50.611744  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHKeyPath
	I0210 12:05:50.611901  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHUsername
	I0210 12:05:50.612041  632952 main.go:141] libmachine: Using SSH client type: native
	I0210 12:05:50.612277  632952 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0210 12:05:50.612294  632952 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0210 12:05:50.716963  632952 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0210 12:05:50.717026  632952 main.go:141] libmachine: found compatible host: buildroot
	I0210 12:05:50.717032  632952 main.go:141] libmachine: Provisioning with buildroot...
	I0210 12:05:50.717049  632952 main.go:141] libmachine: (addons-234038) Calling .GetMachineName
	I0210 12:05:50.717317  632952 buildroot.go:166] provisioning hostname "addons-234038"
	I0210 12:05:50.717349  632952 main.go:141] libmachine: (addons-234038) Calling .GetMachineName
	I0210 12:05:50.717581  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHHostname
	I0210 12:05:50.720156  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:05:50.720557  632952 main.go:141] libmachine: (addons-234038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:e4:b4", ip: ""} in network mk-addons-234038: {Iface:virbr1 ExpiryTime:2025-02-10 13:05:40 +0000 UTC Type:0 Mac:52:54:00:1f:e4:b4 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-234038 Clientid:01:52:54:00:1f:e4:b4}
	I0210 12:05:50.720587  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined IP address 192.168.39.247 and MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:05:50.720711  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHPort
	I0210 12:05:50.720881  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHKeyPath
	I0210 12:05:50.721007  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHKeyPath
	I0210 12:05:50.721156  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHUsername
	I0210 12:05:50.721332  632952 main.go:141] libmachine: Using SSH client type: native
	I0210 12:05:50.721508  632952 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0210 12:05:50.721520  632952 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-234038 && echo "addons-234038" | sudo tee /etc/hostname
	I0210 12:05:50.838215  632952 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-234038
	
	I0210 12:05:50.838254  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHHostname
	I0210 12:05:50.841343  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:05:50.841714  632952 main.go:141] libmachine: (addons-234038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:e4:b4", ip: ""} in network mk-addons-234038: {Iface:virbr1 ExpiryTime:2025-02-10 13:05:40 +0000 UTC Type:0 Mac:52:54:00:1f:e4:b4 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-234038 Clientid:01:52:54:00:1f:e4:b4}
	I0210 12:05:50.841741  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined IP address 192.168.39.247 and MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:05:50.841880  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHPort
	I0210 12:05:50.842114  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHKeyPath
	I0210 12:05:50.842268  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHKeyPath
	I0210 12:05:50.842382  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHUsername
	I0210 12:05:50.842545  632952 main.go:141] libmachine: Using SSH client type: native
	I0210 12:05:50.842731  632952 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0210 12:05:50.842746  632952 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-234038' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-234038/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-234038' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0210 12:05:50.957367  632952 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0210 12:05:50.957402  632952 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20383-625153/.minikube CaCertPath:/home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20383-625153/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20383-625153/.minikube}
	I0210 12:05:50.957453  632952 buildroot.go:174] setting up certificates
	I0210 12:05:50.957467  632952 provision.go:84] configureAuth start
	I0210 12:05:50.957481  632952 main.go:141] libmachine: (addons-234038) Calling .GetMachineName
	I0210 12:05:50.957835  632952 main.go:141] libmachine: (addons-234038) Calling .GetIP
	I0210 12:05:50.960417  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:05:50.960856  632952 main.go:141] libmachine: (addons-234038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:e4:b4", ip: ""} in network mk-addons-234038: {Iface:virbr1 ExpiryTime:2025-02-10 13:05:40 +0000 UTC Type:0 Mac:52:54:00:1f:e4:b4 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-234038 Clientid:01:52:54:00:1f:e4:b4}
	I0210 12:05:50.960888  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined IP address 192.168.39.247 and MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:05:50.961024  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHHostname
	I0210 12:05:50.963402  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:05:50.963756  632952 main.go:141] libmachine: (addons-234038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:e4:b4", ip: ""} in network mk-addons-234038: {Iface:virbr1 ExpiryTime:2025-02-10 13:05:40 +0000 UTC Type:0 Mac:52:54:00:1f:e4:b4 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-234038 Clientid:01:52:54:00:1f:e4:b4}
	I0210 12:05:50.963784  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined IP address 192.168.39.247 and MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:05:50.963882  632952 provision.go:143] copyHostCerts
	I0210 12:05:50.963983  632952 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20383-625153/.minikube/ca.pem (1082 bytes)
	I0210 12:05:50.964132  632952 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20383-625153/.minikube/cert.pem (1123 bytes)
	I0210 12:05:50.964199  632952 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20383-625153/.minikube/key.pem (1675 bytes)
	I0210 12:05:50.964248  632952 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20383-625153/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca-key.pem org=jenkins.addons-234038 san=[127.0.0.1 192.168.39.247 addons-234038 localhost minikube]
	I0210 12:05:51.160978  632952 provision.go:177] copyRemoteCerts
	I0210 12:05:51.161052  632952 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0210 12:05:51.161092  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHHostname
	I0210 12:05:51.163800  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:05:51.164130  632952 main.go:141] libmachine: (addons-234038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:e4:b4", ip: ""} in network mk-addons-234038: {Iface:virbr1 ExpiryTime:2025-02-10 13:05:40 +0000 UTC Type:0 Mac:52:54:00:1f:e4:b4 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-234038 Clientid:01:52:54:00:1f:e4:b4}
	I0210 12:05:51.164159  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined IP address 192.168.39.247 and MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:05:51.164333  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHPort
	I0210 12:05:51.164533  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHKeyPath
	I0210 12:05:51.164689  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHUsername
	I0210 12:05:51.164787  632952 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/addons-234038/id_rsa Username:docker}
	I0210 12:05:51.247707  632952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0210 12:05:51.269629  632952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0210 12:05:51.291183  632952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0210 12:05:51.312607  632952 provision.go:87] duration metric: took 355.123022ms to configureAuth
	I0210 12:05:51.312638  632952 buildroot.go:189] setting minikube options for container-runtime
	I0210 12:05:51.312856  632952 config.go:182] Loaded profile config "addons-234038": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 12:05:51.312956  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHHostname
	I0210 12:05:51.315847  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:05:51.316217  632952 main.go:141] libmachine: (addons-234038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:e4:b4", ip: ""} in network mk-addons-234038: {Iface:virbr1 ExpiryTime:2025-02-10 13:05:40 +0000 UTC Type:0 Mac:52:54:00:1f:e4:b4 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-234038 Clientid:01:52:54:00:1f:e4:b4}
	I0210 12:05:51.316248  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined IP address 192.168.39.247 and MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:05:51.316409  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHPort
	I0210 12:05:51.316623  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHKeyPath
	I0210 12:05:51.316808  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHKeyPath
	I0210 12:05:51.316968  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHUsername
	I0210 12:05:51.317165  632952 main.go:141] libmachine: Using SSH client type: native
	I0210 12:05:51.317398  632952 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0210 12:05:51.317414  632952 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0210 12:05:51.545371  632952 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0210 12:05:51.545406  632952 main.go:141] libmachine: Checking connection to Docker...
	I0210 12:05:51.545415  632952 main.go:141] libmachine: (addons-234038) Calling .GetURL
	I0210 12:05:51.546714  632952 main.go:141] libmachine: (addons-234038) DBG | using libvirt version 6000000
	I0210 12:05:51.549083  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:05:51.549473  632952 main.go:141] libmachine: (addons-234038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:e4:b4", ip: ""} in network mk-addons-234038: {Iface:virbr1 ExpiryTime:2025-02-10 13:05:40 +0000 UTC Type:0 Mac:52:54:00:1f:e4:b4 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-234038 Clientid:01:52:54:00:1f:e4:b4}
	I0210 12:05:51.549501  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined IP address 192.168.39.247 and MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:05:51.549702  632952 main.go:141] libmachine: Docker is up and running!
	I0210 12:05:51.549722  632952 main.go:141] libmachine: Reticulating splines...
	I0210 12:05:51.549732  632952 client.go:171] duration metric: took 25.734423848s to LocalClient.Create
	I0210 12:05:51.549765  632952 start.go:167] duration metric: took 25.734502159s to libmachine.API.Create "addons-234038"
	I0210 12:05:51.549779  632952 start.go:293] postStartSetup for "addons-234038" (driver="kvm2")
	I0210 12:05:51.549793  632952 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0210 12:05:51.549818  632952 main.go:141] libmachine: (addons-234038) Calling .DriverName
	I0210 12:05:51.550045  632952 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0210 12:05:51.550070  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHHostname
	I0210 12:05:51.552021  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:05:51.552364  632952 main.go:141] libmachine: (addons-234038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:e4:b4", ip: ""} in network mk-addons-234038: {Iface:virbr1 ExpiryTime:2025-02-10 13:05:40 +0000 UTC Type:0 Mac:52:54:00:1f:e4:b4 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-234038 Clientid:01:52:54:00:1f:e4:b4}
	I0210 12:05:51.552392  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined IP address 192.168.39.247 and MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:05:51.552495  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHPort
	I0210 12:05:51.552673  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHKeyPath
	I0210 12:05:51.552817  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHUsername
	I0210 12:05:51.552949  632952 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/addons-234038/id_rsa Username:docker}
	I0210 12:05:51.634809  632952 ssh_runner.go:195] Run: cat /etc/os-release
	I0210 12:05:51.638572  632952 info.go:137] Remote host: Buildroot 2023.02.9
	I0210 12:05:51.638600  632952 filesync.go:126] Scanning /home/jenkins/minikube-integration/20383-625153/.minikube/addons for local assets ...
	I0210 12:05:51.638667  632952 filesync.go:126] Scanning /home/jenkins/minikube-integration/20383-625153/.minikube/files for local assets ...
	I0210 12:05:51.638692  632952 start.go:296] duration metric: took 88.905581ms for postStartSetup
	I0210 12:05:51.638736  632952 main.go:141] libmachine: (addons-234038) Calling .GetConfigRaw
	I0210 12:05:51.639456  632952 main.go:141] libmachine: (addons-234038) Calling .GetIP
	I0210 12:05:51.641986  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:05:51.642339  632952 main.go:141] libmachine: (addons-234038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:e4:b4", ip: ""} in network mk-addons-234038: {Iface:virbr1 ExpiryTime:2025-02-10 13:05:40 +0000 UTC Type:0 Mac:52:54:00:1f:e4:b4 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-234038 Clientid:01:52:54:00:1f:e4:b4}
	I0210 12:05:51.642371  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined IP address 192.168.39.247 and MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:05:51.642577  632952 profile.go:143] Saving config to /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/config.json ...
	I0210 12:05:51.642746  632952 start.go:128] duration metric: took 25.847227176s to createHost
	I0210 12:05:51.642769  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHHostname
	I0210 12:05:51.645041  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:05:51.645406  632952 main.go:141] libmachine: (addons-234038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:e4:b4", ip: ""} in network mk-addons-234038: {Iface:virbr1 ExpiryTime:2025-02-10 13:05:40 +0000 UTC Type:0 Mac:52:54:00:1f:e4:b4 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-234038 Clientid:01:52:54:00:1f:e4:b4}
	I0210 12:05:51.645453  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined IP address 192.168.39.247 and MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:05:51.645586  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHPort
	I0210 12:05:51.645764  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHKeyPath
	I0210 12:05:51.645929  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHKeyPath
	I0210 12:05:51.646062  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHUsername
	I0210 12:05:51.646217  632952 main.go:141] libmachine: Using SSH client type: native
	I0210 12:05:51.646373  632952 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0210 12:05:51.646383  632952 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0210 12:05:51.753667  632952 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739189151.732818083
	
	I0210 12:05:51.753703  632952 fix.go:216] guest clock: 1739189151.732818083
	I0210 12:05:51.753715  632952 fix.go:229] Guest: 2025-02-10 12:05:51.732818083 +0000 UTC Remote: 2025-02-10 12:05:51.642758189 +0000 UTC m=+25.958973095 (delta=90.059894ms)
	I0210 12:05:51.753767  632952 fix.go:200] guest clock delta is within tolerance: 90.059894ms
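The fix.go lines above compare the guest's `date +%s.%N` output against the host-side timestamp and accept the ~90ms drift because it is under tolerance. Here is a small stand-alone sketch of that comparison, reusing the two timestamps from this log; the one-second tolerance below is an illustrative assumption, not minikube's exact threshold.

// parseEpoch turns `date +%s.%N` output into a time.Time, then the guest and
// host values from the log above are compared, as fix.go does.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func parseEpoch(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// pad/truncate the fractional part to 9 digits so it reads as nanoseconds
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseEpoch("1739189151.732818083") // guest clock from the log
	if err != nil {
		panic(err)
	}
	host, err := parseEpoch("1739189151.642758189") // host-side timestamp from the log
	if err != nil {
		panic(err)
	}
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta: %v (within 1s tolerance: %v)\n", delta, delta < time.Second)
}

Run as-is, this prints a delta of 90.059894ms, matching the value reported in the log.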
	I0210 12:05:51.753778  632952 start.go:83] releasing machines lock for "addons-234038", held for 25.958349157s
	I0210 12:05:51.753817  632952 main.go:141] libmachine: (addons-234038) Calling .DriverName
	I0210 12:05:51.754107  632952 main.go:141] libmachine: (addons-234038) Calling .GetIP
	I0210 12:05:51.756799  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:05:51.757192  632952 main.go:141] libmachine: (addons-234038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:e4:b4", ip: ""} in network mk-addons-234038: {Iface:virbr1 ExpiryTime:2025-02-10 13:05:40 +0000 UTC Type:0 Mac:52:54:00:1f:e4:b4 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-234038 Clientid:01:52:54:00:1f:e4:b4}
	I0210 12:05:51.757220  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined IP address 192.168.39.247 and MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:05:51.757437  632952 main.go:141] libmachine: (addons-234038) Calling .DriverName
	I0210 12:05:51.758021  632952 main.go:141] libmachine: (addons-234038) Calling .DriverName
	I0210 12:05:51.758210  632952 main.go:141] libmachine: (addons-234038) Calling .DriverName
	I0210 12:05:51.758333  632952 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0210 12:05:51.758394  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHHostname
	I0210 12:05:51.758442  632952 ssh_runner.go:195] Run: cat /version.json
	I0210 12:05:51.758471  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHHostname
	I0210 12:05:51.761067  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:05:51.761146  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:05:51.761460  632952 main.go:141] libmachine: (addons-234038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:e4:b4", ip: ""} in network mk-addons-234038: {Iface:virbr1 ExpiryTime:2025-02-10 13:05:40 +0000 UTC Type:0 Mac:52:54:00:1f:e4:b4 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-234038 Clientid:01:52:54:00:1f:e4:b4}
	I0210 12:05:51.761486  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined IP address 192.168.39.247 and MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:05:51.761511  632952 main.go:141] libmachine: (addons-234038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:e4:b4", ip: ""} in network mk-addons-234038: {Iface:virbr1 ExpiryTime:2025-02-10 13:05:40 +0000 UTC Type:0 Mac:52:54:00:1f:e4:b4 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-234038 Clientid:01:52:54:00:1f:e4:b4}
	I0210 12:05:51.761522  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined IP address 192.168.39.247 and MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:05:51.761675  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHPort
	I0210 12:05:51.761684  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHPort
	I0210 12:05:51.761919  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHKeyPath
	I0210 12:05:51.761934  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHKeyPath
	I0210 12:05:51.762090  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHUsername
	I0210 12:05:51.762104  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHUsername
	I0210 12:05:51.762291  632952 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/addons-234038/id_rsa Username:docker}
	I0210 12:05:51.762298  632952 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/addons-234038/id_rsa Username:docker}
	I0210 12:05:51.856276  632952 ssh_runner.go:195] Run: systemctl --version
	I0210 12:05:51.861756  632952 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0210 12:05:52.019416  632952 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0210 12:05:52.025036  632952 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0210 12:05:52.025126  632952 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0210 12:05:52.041264  632952 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0210 12:05:52.041285  632952 start.go:495] detecting cgroup driver to use...
	I0210 12:05:52.041351  632952 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0210 12:05:52.057498  632952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0210 12:05:52.070527  632952 docker.go:217] disabling cri-docker service (if available) ...
	I0210 12:05:52.070578  632952 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0210 12:05:52.083083  632952 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0210 12:05:52.095645  632952 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0210 12:05:52.206443  632952 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0210 12:05:52.354317  632952 docker.go:233] disabling docker service ...
	I0210 12:05:52.354391  632952 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0210 12:05:52.367525  632952 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0210 12:05:52.379280  632952 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0210 12:05:52.496050  632952 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0210 12:05:52.608588  632952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0210 12:05:52.621857  632952 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0210 12:05:52.638404  632952 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0210 12:05:52.638477  632952 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 12:05:52.647873  632952 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0210 12:05:52.647949  632952 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 12:05:52.657275  632952 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 12:05:52.666341  632952 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 12:05:52.675432  632952 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0210 12:05:52.684590  632952 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 12:05:52.693839  632952 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 12:05:52.708928  632952 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
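The series of `sed -i` runs above rewrites individual keys in the CRI-O drop-in (pause image, cgroup manager, conmon cgroup, default sysctls). For context, a minimal Go sketch of the same kind of "replace an existing key = value line" edit, pointed at a local copy of such a file; the path in main is an illustrative assumption, and the real edits run over SSH with sudo as the log shows. Like the sed commands, it assumes the key is already present in the file.

// setCrioKey rewrites a "key = value" line in a CRI-O style drop-in file,
// mirroring the sed edits in the log above. Sketch only.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func setCrioKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf(`%s = %q`, key, value)))
	return os.WriteFile(path, out, 0644)
}

func main() {
	// e.g. the cgroup driver edit from the log, applied to a local scratch copy
	if err := setCrioKey("/tmp/02-crio.conf", "cgroup_manager", "cgroupfs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}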
	I0210 12:05:52.718084  632952 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0210 12:05:52.726337  632952 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0210 12:05:52.726410  632952 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0210 12:05:52.737436  632952 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0210 12:05:52.745759  632952 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 12:05:52.853655  632952 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0210 12:05:52.943946  632952 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0210 12:05:52.944075  632952 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0210 12:05:52.948393  632952 start.go:563] Will wait 60s for crictl version
	I0210 12:05:52.948473  632952 ssh_runner.go:195] Run: which crictl
	I0210 12:05:52.951913  632952 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0210 12:05:52.992275  632952 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0210 12:05:52.992399  632952 ssh_runner.go:195] Run: crio --version
	I0210 12:05:53.020285  632952 ssh_runner.go:195] Run: crio --version
	I0210 12:05:53.049600  632952 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0210 12:05:53.050847  632952 main.go:141] libmachine: (addons-234038) Calling .GetIP
	I0210 12:05:53.053370  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:05:53.053734  632952 main.go:141] libmachine: (addons-234038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:e4:b4", ip: ""} in network mk-addons-234038: {Iface:virbr1 ExpiryTime:2025-02-10 13:05:40 +0000 UTC Type:0 Mac:52:54:00:1f:e4:b4 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-234038 Clientid:01:52:54:00:1f:e4:b4}
	I0210 12:05:53.053766  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined IP address 192.168.39.247 and MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:05:53.053946  632952 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0210 12:05:53.057788  632952 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 12:05:53.069579  632952 kubeadm.go:883] updating cluster {Name:addons-234038 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-234038 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.247 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I0210 12:05:53.069755  632952 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0210 12:05:53.069823  632952 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 12:05:53.103592  632952 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0210 12:05:53.103671  632952 ssh_runner.go:195] Run: which lz4
	I0210 12:05:53.107562  632952 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0210 12:05:53.111544  632952 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0210 12:05:53.111576  632952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0210 12:05:54.267808  632952 crio.go:462] duration metric: took 1.160281441s to copy over tarball
	I0210 12:05:54.267894  632952 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0210 12:05:56.340844  632952 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.072913768s)
	I0210 12:05:56.340885  632952 crio.go:469] duration metric: took 2.073040284s to extract the tarball
	I0210 12:05:56.340925  632952 ssh_runner.go:146] rm: /preloaded.tar.lz4
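The preload step above copies preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 into the VM and unpacks it with `tar --xattrs -I lz4`, so the container images are present before kubeadm runs. As a side note, here is a sketch that merely lists such a tarball's entries without extracting it; it assumes the github.com/pierrec/lz4/v4 package, which is not how minikube itself handles the archive (it shells out to tar, as the log shows).

// listPreload streams the lz4-compressed tarball and prints a few entry names,
// just to show what the preload contains. Sketch only; path taken from the log.
package main

import (
	"archive/tar"
	"fmt"
	"io"
	"os"

	"github.com/pierrec/lz4/v4"
)

func main() {
	f, err := os.Open("/preloaded.tar.lz4")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	tr := tar.NewReader(lz4.NewReader(f))
	var entries int
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		entries++
		if entries <= 5 {
			fmt.Println(hdr.Name) // print only the first few entry names
		}
	}
	fmt.Printf("%d entries total\n", entries)
}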
	I0210 12:05:56.377628  632952 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 12:05:56.416975  632952 crio.go:514] all images are preloaded for cri-o runtime.
	I0210 12:05:56.417011  632952 cache_images.go:84] Images are preloaded, skipping loading
	I0210 12:05:56.417023  632952 kubeadm.go:934] updating node { 192.168.39.247 8443 v1.32.1 crio true true} ...
	I0210 12:05:56.417240  632952 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-234038 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.247
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:addons-234038 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0210 12:05:56.417321  632952 ssh_runner.go:195] Run: crio config
	I0210 12:05:56.459390  632952 cni.go:84] Creating CNI manager for ""
	I0210 12:05:56.459419  632952 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 12:05:56.459431  632952 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0210 12:05:56.459455  632952 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.247 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-234038 NodeName:addons-234038 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.247"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.247 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0210 12:05:56.459580  632952 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.247
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-234038"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.247"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.247"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0210 12:05:56.459645  632952 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0210 12:05:56.468923  632952 binaries.go:44] Found k8s binaries, skipping transfer
	I0210 12:05:56.469008  632952 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0210 12:05:56.477839  632952 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0210 12:05:56.493012  632952 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0210 12:05:56.508350  632952 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I0210 12:05:56.523364  632952 ssh_runner.go:195] Run: grep 192.168.39.247	control-plane.minikube.internal$ /etc/hosts
	I0210 12:05:56.526904  632952 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.247	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
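The bash one-liner above rewrites /etc/hosts so control-plane.minikube.internal resolves to the node IP: it drops any existing line for that hostname and appends a fresh mapping. A rough Go equivalent of that drop-then-append edit follows, run here against a scratch file rather than the real /etc/hosts; paths are illustrative and the actual step runs remotely with sudo, as the log shows.

// ensureHostsEntry removes any stale "…<TAB>host" line from an /etc/hosts-style
// file and appends "ip<TAB>host", mirroring the grep/echo pipeline in the log.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line == "" || strings.HasSuffix(line, "\t"+host) {
			continue // drop blank lines and any stale mapping for this host
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// exercised against a scratch copy rather than the real /etc/hosts
	if err := ensureHostsEntry("/tmp/hosts.test", "192.168.39.247", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}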
	I0210 12:05:56.537718  632952 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 12:05:56.662648  632952 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 12:05:56.678838  632952 certs.go:68] Setting up /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038 for IP: 192.168.39.247
	I0210 12:05:56.678874  632952 certs.go:194] generating shared ca certs ...
	I0210 12:05:56.678899  632952 certs.go:226] acquiring lock for ca certs: {Name:mkf2f72f82a6bd7e3c16bb224cd26b80c3c89e0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:05:56.679117  632952 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20383-625153/.minikube/ca.key
	I0210 12:05:56.886891  632952 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20383-625153/.minikube/ca.crt ...
	I0210 12:05:56.886926  632952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20383-625153/.minikube/ca.crt: {Name:mk8ed19feed6c131935c95fda8dc78c17a28bf93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:05:56.887150  632952 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20383-625153/.minikube/ca.key ...
	I0210 12:05:56.887169  632952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20383-625153/.minikube/ca.key: {Name:mk3b8c0c3ae0fb53303ac2eeb701490f4afd6690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:05:56.887293  632952 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20383-625153/.minikube/proxy-client-ca.key
	I0210 12:05:56.998276  632952 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20383-625153/.minikube/proxy-client-ca.crt ...
	I0210 12:05:56.998307  632952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20383-625153/.minikube/proxy-client-ca.crt: {Name:mk4f26c5d85f4678cdc80f21399af8c128883ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:05:56.998512  632952 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20383-625153/.minikube/proxy-client-ca.key ...
	I0210 12:05:56.998530  632952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20383-625153/.minikube/proxy-client-ca.key: {Name:mkfb5441e1ff18666e147e029a6a1e3fa61f4906 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:05:56.998661  632952 certs.go:256] generating profile certs ...
	I0210 12:05:56.998746  632952 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/client.key
	I0210 12:05:56.998766  632952 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/client.crt with IP's: []
	I0210 12:05:57.140723  632952 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/client.crt ...
	I0210 12:05:57.140755  632952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/client.crt: {Name:mkb8efcae5f6986be5fd0b9b7d9f9f9f0a14b724 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:05:57.140952  632952 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/client.key ...
	I0210 12:05:57.140969  632952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/client.key: {Name:mkae3af60fc6d1b37c4b2095d295a6bf940938d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:05:57.141070  632952 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/apiserver.key.251650ce
	I0210 12:05:57.141100  632952 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/apiserver.crt.251650ce with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.247]
	I0210 12:05:57.323327  632952 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/apiserver.crt.251650ce ...
	I0210 12:05:57.323369  632952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/apiserver.crt.251650ce: {Name:mkde5a5d95a5646888ee34825765d75781252738 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:05:57.323580  632952 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/apiserver.key.251650ce ...
	I0210 12:05:57.323600  632952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/apiserver.key.251650ce: {Name:mk7307f01d877da7225966760e2731c2ac83de60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:05:57.323707  632952 certs.go:381] copying /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/apiserver.crt.251650ce -> /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/apiserver.crt
	I0210 12:05:57.323788  632952 certs.go:385] copying /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/apiserver.key.251650ce -> /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/apiserver.key
	I0210 12:05:57.323835  632952 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/proxy-client.key
	I0210 12:05:57.323854  632952 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/proxy-client.crt with IP's: []
	I0210 12:05:57.382740  632952 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/proxy-client.crt ...
	I0210 12:05:57.382773  632952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/proxy-client.crt: {Name:mk6cc4017a2ae1b8b66518d4bb532c49e56ea9d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:05:57.382964  632952 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/proxy-client.key ...
	I0210 12:05:57.382981  632952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/proxy-client.key: {Name:mkb8b6f4f1444c0781fa6b6636e7bb35c1c07f0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:05:57.383188  632952 certs.go:484] found cert: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca-key.pem (1675 bytes)
	I0210 12:05:57.383227  632952 certs.go:484] found cert: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca.pem (1082 bytes)
	I0210 12:05:57.383256  632952 certs.go:484] found cert: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/cert.pem (1123 bytes)
	I0210 12:05:57.383282  632952 certs.go:484] found cert: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/key.pem (1675 bytes)
	I0210 12:05:57.383883  632952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0210 12:05:57.411039  632952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0210 12:05:57.434250  632952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0210 12:05:57.466192  632952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0210 12:05:57.488111  632952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0210 12:05:57.510308  632952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0210 12:05:57.532335  632952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0210 12:05:57.553789  632952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0210 12:05:57.575012  632952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0210 12:05:57.596091  632952 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0210 12:05:57.610797  632952 ssh_runner.go:195] Run: openssl version
	I0210 12:05:57.616210  632952 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0210 12:05:57.625772  632952 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0210 12:05:57.629717  632952 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 10 12:05 /usr/share/ca-certificates/minikubeCA.pem
	I0210 12:05:57.629767  632952 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0210 12:05:57.635658  632952 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
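After copying minikubeCA.pem into /usr/share/ca-certificates, the lines above hash it with `openssl x509 -hash -noout` and link it as /etc/ssl/certs/b5213941.0 so OpenSSL-based clients can find it by subject hash. Below is a small standard-library sketch for sanity-checking such an installed CA: it only parses and prints the certificate and does not reproduce OpenSSL's subject-hash naming; the path is taken from this log.

// inspectCert parses a PEM-encoded CA certificate and prints its subject,
// CA flag and validity window. Sketch only.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	path := "/usr/share/ca-certificates/minikubeCA.pem"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil || block.Type != "CERTIFICATE" {
		fmt.Fprintln(os.Stderr, "no CERTIFICATE block found in", path)
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("subject: %s\nis CA:   %v\nvalid:   %s -> %s\n",
		cert.Subject, cert.IsCA, cert.NotBefore, cert.NotAfter)
}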
	I0210 12:05:57.649076  632952 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0210 12:05:57.652681  632952 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0210 12:05:57.652743  632952 kubeadm.go:392] StartCluster: {Name:addons-234038 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-234038 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.247 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 12:05:57.652832  632952 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0210 12:05:57.652898  632952 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0210 12:05:57.687339  632952 cri.go:89] found id: ""
	I0210 12:05:57.687438  632952 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0210 12:05:57.696785  632952 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0210 12:05:57.705535  632952 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 12:05:57.714059  632952 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 12:05:57.714079  632952 kubeadm.go:157] found existing configuration files:
	
	I0210 12:05:57.714131  632952 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0210 12:05:57.722122  632952 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 12:05:57.722178  632952 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 12:05:57.730510  632952 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0210 12:05:57.738422  632952 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 12:05:57.738470  632952 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 12:05:57.746684  632952 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0210 12:05:57.754569  632952 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 12:05:57.754618  632952 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 12:05:57.762952  632952 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0210 12:05:57.770953  632952 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 12:05:57.771003  632952 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0210 12:05:57.779308  632952 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0210 12:05:57.826633  632952 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0210 12:05:57.826688  632952 kubeadm.go:310] [preflight] Running pre-flight checks
	I0210 12:05:57.927368  632952 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0210 12:05:57.927488  632952 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0210 12:05:57.927578  632952 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0210 12:05:57.937975  632952 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0210 12:05:57.941167  632952 out.go:235]   - Generating certificates and keys ...
	I0210 12:05:57.941272  632952 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0210 12:05:57.941342  632952 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0210 12:05:58.039008  632952 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0210 12:05:58.192857  632952 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0210 12:05:58.357352  632952 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0210 12:05:58.558254  632952 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0210 12:05:58.785800  632952 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0210 12:05:58.785979  632952 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-234038 localhost] and IPs [192.168.39.247 127.0.0.1 ::1]
	I0210 12:05:58.994770  632952 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0210 12:05:58.995033  632952 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-234038 localhost] and IPs [192.168.39.247 127.0.0.1 ::1]
	I0210 12:05:59.249198  632952 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0210 12:05:59.624702  632952 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0210 12:05:59.750171  632952 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0210 12:05:59.750301  632952 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0210 12:05:59.944758  632952 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0210 12:06:00.380169  632952 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0210 12:06:00.680359  632952 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0210 12:06:00.928069  632952 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0210 12:06:01.012437  632952 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0210 12:06:01.012947  632952 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0210 12:06:01.015416  632952 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0210 12:06:01.064644  632952 out.go:235]   - Booting up control plane ...
	I0210 12:06:01.064858  632952 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0210 12:06:01.064997  632952 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0210 12:06:01.065127  632952 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0210 12:06:01.065282  632952 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0210 12:06:01.065425  632952 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0210 12:06:01.065481  632952 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0210 12:06:01.176591  632952 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0210 12:06:01.176745  632952 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0210 12:06:02.178722  632952 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.003106679s
	I0210 12:06:02.178854  632952 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0210 12:06:06.682548  632952 kubeadm.go:310] [api-check] The API server is healthy after 4.504309573s
	I0210 12:06:06.696889  632952 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0210 12:06:06.711579  632952 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0210 12:06:06.746797  632952 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0210 12:06:06.747063  632952 kubeadm.go:310] [mark-control-plane] Marking the node addons-234038 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0210 12:06:06.759891  632952 kubeadm.go:310] [bootstrap-token] Using token: owi7vg.z33860j8g8ethnx3
	I0210 12:06:06.761094  632952 out.go:235]   - Configuring RBAC rules ...
	I0210 12:06:06.761276  632952 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0210 12:06:06.771533  632952 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0210 12:06:06.780551  632952 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0210 12:06:06.783780  632952 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0210 12:06:06.787484  632952 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0210 12:06:06.795820  632952 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0210 12:06:07.087371  632952 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0210 12:06:07.510269  632952 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0210 12:06:08.086469  632952 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0210 12:06:08.087122  632952 kubeadm.go:310] 
	I0210 12:06:08.087231  632952 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0210 12:06:08.087251  632952 kubeadm.go:310] 
	I0210 12:06:08.087370  632952 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0210 12:06:08.087382  632952 kubeadm.go:310] 
	I0210 12:06:08.087442  632952 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0210 12:06:08.087530  632952 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0210 12:06:08.087606  632952 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0210 12:06:08.087617  632952 kubeadm.go:310] 
	I0210 12:06:08.087684  632952 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0210 12:06:08.087692  632952 kubeadm.go:310] 
	I0210 12:06:08.087761  632952 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0210 12:06:08.087769  632952 kubeadm.go:310] 
	I0210 12:06:08.087844  632952 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0210 12:06:08.087939  632952 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0210 12:06:08.088013  632952 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0210 12:06:08.088024  632952 kubeadm.go:310] 
	I0210 12:06:08.088127  632952 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0210 12:06:08.088242  632952 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0210 12:06:08.088258  632952 kubeadm.go:310] 
	I0210 12:06:08.088365  632952 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token owi7vg.z33860j8g8ethnx3 \
	I0210 12:06:08.088461  632952 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:37d4bace26002796dd310d86a55ac47153684aa943b1e8f0eb361864e8edcaff \
	I0210 12:06:08.088481  632952 kubeadm.go:310] 	--control-plane 
	I0210 12:06:08.088486  632952 kubeadm.go:310] 
	I0210 12:06:08.088555  632952 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0210 12:06:08.088565  632952 kubeadm.go:310] 
	I0210 12:06:08.088681  632952 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token owi7vg.z33860j8g8ethnx3 \
	I0210 12:06:08.088804  632952 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:37d4bace26002796dd310d86a55ac47153684aa943b1e8f0eb361864e8edcaff 
	I0210 12:06:08.089399  632952 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
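The kubeadm output above closes with a warning that the kubelet service is not enabled, and both join commands carry a --discovery-token-ca-cert-hash. A minimal sketch of how those two pieces map to commands on the node (not taken from this log; the ca.crt path only assumes the "/var/lib/minikube/certs" certificateDir reported in the [certs] phase):

	# clear the [WARNING Service-Kubelet] by enabling the kubelet unit
	sudo systemctl enable kubelet.service
	# recompute the discovery-token CA cert hash used by "kubeadm join"
	# (standard kubeadm recipe; cert path follows the certificateDir above)
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'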
	I0210 12:06:08.089447  632952 cni.go:84] Creating CNI manager for ""
	I0210 12:06:08.089459  632952 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 12:06:08.091963  632952 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0210 12:06:08.093250  632952 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0210 12:06:08.105210  632952 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
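The log only records that a 496-byte 1-k8s.conflist was copied into /etc/cni/net.d; its contents are not shown. A representative bridge CNI conflist is sketched below purely for illustration; every field value, the pod subnet especially, is an assumption rather than the actual file minikube wrote:

	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF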
	I0210 12:06:08.121673  632952 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0210 12:06:08.121770  632952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 12:06:08.121851  632952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-234038 minikube.k8s.io/updated_at=2025_02_10T12_06_08_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=ef65fd9d75393231710a2bc61f2cab58e3e6ecb2 minikube.k8s.io/name=addons-234038 minikube.k8s.io/primary=true
	I0210 12:06:08.274452  632952 ops.go:34] apiserver oom_adj: -16
	I0210 12:06:08.274496  632952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 12:06:08.774875  632952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 12:06:09.274568  632952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 12:06:09.775408  632952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 12:06:10.275525  632952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 12:06:10.775336  632952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 12:06:11.274583  632952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 12:06:11.775027  632952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 12:06:12.275321  632952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 12:06:12.355238  632952 kubeadm.go:1113] duration metric: took 4.233542256s to wait for elevateKubeSystemPrivileges
	I0210 12:06:12.355293  632952 kubeadm.go:394] duration metric: took 14.702554396s to StartCluster
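The burst of "kubectl get sa default" runs above is minikube polling until the default service account exists before it reports elevateKubeSystemPrivileges as done. As a rough shell equivalent (minikube drives this loop from Go; the sketch reuses the exact command and paths shown in the log):

	# poll until the controller-manager has created the "default" ServiceAccount
	until sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done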
	I0210 12:06:12.355325  632952 settings.go:142] acquiring lock: {Name:mk4bd8331d641665e48ff1d1c4382f2e915609be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:06:12.355500  632952 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20383-625153/kubeconfig
	I0210 12:06:12.356104  632952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20383-625153/kubeconfig: {Name:mke7ef1ff4ff1259856291979fdd0337df3a08b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:06:12.356319  632952 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0210 12:06:12.356326  632952 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.247 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0210 12:06:12.356408  632952 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
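The toEnable map above is the addon set resolved for this run. The same toggles can be exercised against the finished profile with ordinary minikube subcommands; the examples below are illustrative and not part of the test:

	# show addon state for the test profile
	out/minikube-linux-amd64 -p addons-234038 addons list
	# enable a single addon after the cluster is up
	out/minikube-linux-amd64 -p addons-234038 addons enable ingress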
	I0210 12:06:12.356544  632952 addons.go:69] Setting yakd=true in profile "addons-234038"
	I0210 12:06:12.356548  632952 addons.go:69] Setting default-storageclass=true in profile "addons-234038"
	I0210 12:06:12.356564  632952 config.go:182] Loaded profile config "addons-234038": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 12:06:12.356566  632952 addons.go:69] Setting cloud-spanner=true in profile "addons-234038"
	I0210 12:06:12.356590  632952 addons.go:238] Setting addon yakd=true in "addons-234038"
	I0210 12:06:12.356601  632952 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-234038"
	I0210 12:06:12.356610  632952 addons.go:238] Setting addon cloud-spanner=true in "addons-234038"
	I0210 12:06:12.356619  632952 addons.go:69] Setting registry=true in profile "addons-234038"
	I0210 12:06:12.356631  632952 addons.go:69] Setting ingress=true in profile "addons-234038"
	I0210 12:06:12.356637  632952 host.go:66] Checking if "addons-234038" exists ...
	I0210 12:06:12.356639  632952 addons.go:238] Setting addon registry=true in "addons-234038"
	I0210 12:06:12.356634  632952 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-234038"
	I0210 12:06:12.356649  632952 addons.go:238] Setting addon ingress=true in "addons-234038"
	I0210 12:06:12.356651  632952 host.go:66] Checking if "addons-234038" exists ...
	I0210 12:06:12.356651  632952 addons.go:69] Setting storage-provisioner=true in profile "addons-234038"
	I0210 12:06:12.356658  632952 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-234038"
	I0210 12:06:12.356665  632952 host.go:66] Checking if "addons-234038" exists ...
	I0210 12:06:12.356673  632952 addons.go:238] Setting addon storage-provisioner=true in "addons-234038"
	I0210 12:06:12.356694  632952 host.go:66] Checking if "addons-234038" exists ...
	I0210 12:06:12.356709  632952 host.go:66] Checking if "addons-234038" exists ...
	I0210 12:06:12.356635  632952 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-234038"
	I0210 12:06:12.356802  632952 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-234038"
	I0210 12:06:12.356824  632952 host.go:66] Checking if "addons-234038" exists ...
	I0210 12:06:12.357155  632952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:06:12.357163  632952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:06:12.357176  632952 addons.go:69] Setting volcano=true in profile "addons-234038"
	I0210 12:06:12.357185  632952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:06:12.357194  632952 addons.go:238] Setting addon volcano=true in "addons-234038"
	I0210 12:06:12.357195  632952 addons.go:69] Setting ingress-dns=true in profile "addons-234038"
	I0210 12:06:12.357197  632952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:06:12.357207  632952 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:06:12.357209  632952 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:06:12.357216  632952 host.go:66] Checking if "addons-234038" exists ...
	I0210 12:06:12.357200  632952 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:06:12.357225  632952 addons.go:69] Setting metrics-server=true in profile "addons-234038"
	I0210 12:06:12.357235  632952 addons.go:238] Setting addon metrics-server=true in "addons-234038"
	I0210 12:06:12.357261  632952 host.go:66] Checking if "addons-234038" exists ...
	I0210 12:06:12.357156  632952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:06:12.357276  632952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:06:12.357285  632952 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:06:12.357295  632952 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:06:12.357454  632952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:06:12.357472  632952 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:06:12.357529  632952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:06:12.357551  632952 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:06:12.357586  632952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:06:12.357621  632952 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:06:12.357641  632952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:06:12.357216  632952 addons.go:69] Setting inspektor-gadget=true in profile "addons-234038"
	I0210 12:06:12.357715  632952 addons.go:69] Setting volumesnapshots=true in profile "addons-234038"
	I0210 12:06:12.357725  632952 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-234038"
	I0210 12:06:12.357735  632952 addons.go:238] Setting addon volumesnapshots=true in "addons-234038"
	I0210 12:06:12.356599  632952 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-234038"
	I0210 12:06:12.357717  632952 addons.go:238] Setting addon inspektor-gadget=true in "addons-234038"
	I0210 12:06:12.357761  632952 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-234038"
	I0210 12:06:12.357779  632952 host.go:66] Checking if "addons-234038" exists ...
	I0210 12:06:12.357792  632952 host.go:66] Checking if "addons-234038" exists ...
	I0210 12:06:12.357824  632952 host.go:66] Checking if "addons-234038" exists ...
	I0210 12:06:12.357917  632952 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-234038"
	I0210 12:06:12.357961  632952 host.go:66] Checking if "addons-234038" exists ...
	I0210 12:06:12.358179  632952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:06:12.358211  632952 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:06:12.358273  632952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:06:12.358317  632952 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:06:12.357207  632952 addons.go:238] Setting addon ingress-dns=true in "addons-234038"
	I0210 12:06:12.358338  632952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:06:12.358361  632952 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:06:12.356622  632952 addons.go:69] Setting gcp-auth=true in profile "addons-234038"
	I0210 12:06:12.358541  632952 mustload.go:65] Loading cluster: addons-234038
	I0210 12:06:12.358213  632952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:06:12.358589  632952 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:06:12.358730  632952 config.go:182] Loaded profile config "addons-234038": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 12:06:12.357218  632952 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:06:12.358369  632952 host.go:66] Checking if "addons-234038" exists ...
	I0210 12:06:12.357696  632952 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:06:12.362551  632952 out.go:177] * Verifying Kubernetes components...
	I0210 12:06:12.364417  632952 ssh_runner.go:195] Run: sudo systemctl daemon-reload
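"Verifying Kubernetes components" amounts to waiting for the node and its system pods to become healthy within the 6m0s budget noted a few lines earlier. A hand-run approximation, assuming the same kubectl binary and kubeconfig paths the log already uses:

	# rough equivalent of the verification step: wait for the node to report Ready
	sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  wait --for=condition=Ready node/addons-234038 --timeout=6m0s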
	I0210 12:06:12.378390  632952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36687
	I0210 12:06:12.378592  632952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43693
	I0210 12:06:12.378710  632952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34921
	I0210 12:06:12.378789  632952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34957
	I0210 12:06:12.378933  632952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32991
	I0210 12:06:12.378978  632952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43805
	I0210 12:06:12.379116  632952 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:06:12.379168  632952 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:06:12.379257  632952 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:06:12.379311  632952 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:06:12.379356  632952 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:06:12.379808  632952 main.go:141] libmachine: Using API Version  1
	I0210 12:06:12.379832  632952 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:06:12.379995  632952 main.go:141] libmachine: Using API Version  1
	I0210 12:06:12.380020  632952 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:06:12.380153  632952 main.go:141] libmachine: Using API Version  1
	I0210 12:06:12.380166  632952 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:06:12.380309  632952 main.go:141] libmachine: Using API Version  1
	I0210 12:06:12.380321  632952 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:06:12.380374  632952 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:06:12.380510  632952 main.go:141] libmachine: Using API Version  1
	I0210 12:06:12.380522  632952 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:06:12.380575  632952 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:06:12.380624  632952 main.go:141] libmachine: (addons-234038) Calling .GetState
	I0210 12:06:12.380664  632952 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:06:12.380697  632952 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:06:12.380907  632952 main.go:141] libmachine: (addons-234038) Calling .GetState
	I0210 12:06:12.380975  632952 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:06:12.381261  632952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46815
	I0210 12:06:12.385618  632952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:06:12.385668  632952 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:06:12.386218  632952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:06:12.386244  632952 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:06:12.386450  632952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:06:12.386490  632952 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:06:12.386618  632952 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-234038"
	I0210 12:06:12.386650  632952 host.go:66] Checking if "addons-234038" exists ...
	I0210 12:06:12.386797  632952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:06:12.386833  632952 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:06:12.386922  632952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:06:12.386945  632952 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:06:12.387005  632952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:06:12.387041  632952 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:06:12.387843  632952 addons.go:238] Setting addon default-storageclass=true in "addons-234038"
	I0210 12:06:12.387886  632952 host.go:66] Checking if "addons-234038" exists ...
	I0210 12:06:12.387955  632952 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:06:12.388396  632952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:06:12.388450  632952 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:06:12.388656  632952 main.go:141] libmachine: Using API Version  1
	I0210 12:06:12.388681  632952 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:06:12.389196  632952 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:06:12.389281  632952 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:06:12.389836  632952 main.go:141] libmachine: Using API Version  1
	I0210 12:06:12.389864  632952 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:06:12.390372  632952 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:06:12.409213  632952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36343
	I0210 12:06:12.409798  632952 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:06:12.410315  632952 main.go:141] libmachine: Using API Version  1
	I0210 12:06:12.410346  632952 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:06:12.410789  632952 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:06:12.411366  632952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:06:12.411399  632952 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:06:12.412893  632952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38515
	I0210 12:06:12.413381  632952 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:06:12.413927  632952 main.go:141] libmachine: Using API Version  1
	I0210 12:06:12.413954  632952 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:06:12.414295  632952 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:06:12.414898  632952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:06:12.414957  632952 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:06:12.419415  632952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33009
	I0210 12:06:12.420072  632952 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:06:12.420603  632952 main.go:141] libmachine: Using API Version  1
	I0210 12:06:12.420621  632952 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:06:12.421009  632952 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:06:12.421206  632952 main.go:141] libmachine: (addons-234038) Calling .GetState
	I0210 12:06:12.421880  632952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:06:12.421917  632952 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:06:12.422448  632952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:06:12.422492  632952 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:06:12.423242  632952 main.go:141] libmachine: (addons-234038) Calling .DriverName
	I0210 12:06:12.425247  632952 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I0210 12:06:12.428231  632952 out.go:177]   - Using image docker.io/registry:2.8.3
	I0210 12:06:12.429698  632952 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0210 12:06:12.429727  632952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0210 12:06:12.429753  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHHostname
	I0210 12:06:12.429884  632952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44327
	I0210 12:06:12.430529  632952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45173
	I0210 12:06:12.430702  632952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38709
	I0210 12:06:12.430780  632952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41789
	I0210 12:06:12.430949  632952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45551
	I0210 12:06:12.431561  632952 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:06:12.431611  632952 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:06:12.431884  632952 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:06:12.432061  632952 main.go:141] libmachine: Using API Version  1
	I0210 12:06:12.432087  632952 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:06:12.432264  632952 main.go:141] libmachine: Using API Version  1
	I0210 12:06:12.432287  632952 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:06:12.432636  632952 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:06:12.432685  632952 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:06:12.432771  632952 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:06:12.432801  632952 main.go:141] libmachine: Using API Version  1
	I0210 12:06:12.432818  632952 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:06:12.432889  632952 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:06:12.433298  632952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:06:12.433352  632952 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:06:12.434500  632952 main.go:141] libmachine: Using API Version  1
	I0210 12:06:12.434527  632952 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:06:12.434591  632952 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:06:12.434655  632952 main.go:141] libmachine: (addons-234038) Calling .GetState
	I0210 12:06:12.434799  632952 main.go:141] libmachine: Using API Version  1
	I0210 12:06:12.434817  632952 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:06:12.435262  632952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:06:12.435309  632952 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:06:12.435893  632952 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:06:12.436271  632952 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:06:12.436817  632952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:06:12.436862  632952 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:06:12.437063  632952 main.go:141] libmachine: (addons-234038) Calling .DriverName
	I0210 12:06:12.437448  632952 main.go:141] libmachine: (addons-234038) Calling .GetState
	I0210 12:06:12.439016  632952 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0210 12:06:12.439565  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:06:12.440318  632952 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0210 12:06:12.440340  632952 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0210 12:06:12.440374  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHHostname
	I0210 12:06:12.440867  632952 main.go:141] libmachine: (addons-234038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:e4:b4", ip: ""} in network mk-addons-234038: {Iface:virbr1 ExpiryTime:2025-02-10 13:05:40 +0000 UTC Type:0 Mac:52:54:00:1f:e4:b4 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-234038 Clientid:01:52:54:00:1f:e4:b4}
	I0210 12:06:12.440889  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined IP address 192.168.39.247 and MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:06:12.440995  632952 main.go:141] libmachine: (addons-234038) Calling .DriverName
	I0210 12:06:12.441456  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHPort
	I0210 12:06:12.441698  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHKeyPath
	I0210 12:06:12.441999  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHUsername
	I0210 12:06:12.442327  632952 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/addons-234038/id_rsa Username:docker}
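Each "new ssh client" line above carries everything needed to reproduce the connection by hand when debugging a failed addon rollout; the invocation below simply re-reads those fields (host key checking is relaxed only because this is a throwaway test VM):

	# manual equivalent of the sshutil client from the log line above
	ssh -o StrictHostKeyChecking=no -p 22 \
	  -i /home/jenkins/minikube-integration/20383-625153/.minikube/machines/addons-234038/id_rsa \
	  docker@192.168.39.247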
	I0210 12:06:12.442526  632952 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0210 12:06:12.443650  632952 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0210 12:06:12.443669  632952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0210 12:06:12.443687  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHHostname
	I0210 12:06:12.444121  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:06:12.445203  632952 main.go:141] libmachine: (addons-234038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:e4:b4", ip: ""} in network mk-addons-234038: {Iface:virbr1 ExpiryTime:2025-02-10 13:05:40 +0000 UTC Type:0 Mac:52:54:00:1f:e4:b4 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-234038 Clientid:01:52:54:00:1f:e4:b4}
	I0210 12:06:12.445646  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined IP address 192.168.39.247 and MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:06:12.446143  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHPort
	I0210 12:06:12.446454  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHKeyPath
	I0210 12:06:12.446752  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHUsername
	I0210 12:06:12.446900  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:06:12.446946  632952 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/addons-234038/id_rsa Username:docker}
	I0210 12:06:12.447650  632952 main.go:141] libmachine: (addons-234038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:e4:b4", ip: ""} in network mk-addons-234038: {Iface:virbr1 ExpiryTime:2025-02-10 13:05:40 +0000 UTC Type:0 Mac:52:54:00:1f:e4:b4 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-234038 Clientid:01:52:54:00:1f:e4:b4}
	I0210 12:06:12.447674  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined IP address 192.168.39.247 and MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:06:12.447820  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHPort
	I0210 12:06:12.447991  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHKeyPath
	I0210 12:06:12.448156  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHUsername
	I0210 12:06:12.448347  632952 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/addons-234038/id_rsa Username:docker}
	I0210 12:06:12.451529  632952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40317
	I0210 12:06:12.452056  632952 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:06:12.452503  632952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36191
	I0210 12:06:12.453033  632952 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:06:12.453593  632952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32909
	I0210 12:06:12.453939  632952 main.go:141] libmachine: Using API Version  1
	I0210 12:06:12.453965  632952 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:06:12.454276  632952 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:06:12.454641  632952 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:06:12.454932  632952 main.go:141] libmachine: Using API Version  1
	I0210 12:06:12.454955  632952 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:06:12.455225  632952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:06:12.455277  632952 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:06:12.455524  632952 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:06:12.455693  632952 main.go:141] libmachine: (addons-234038) Calling .GetState
	I0210 12:06:12.455812  632952 main.go:141] libmachine: Using API Version  1
	I0210 12:06:12.455834  632952 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:06:12.456382  632952 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:06:12.458610  632952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38797
	I0210 12:06:12.458892  632952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41539
	I0210 12:06:12.459093  632952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:06:12.459139  632952 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:06:12.459300  632952 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:06:12.459817  632952 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:06:12.460012  632952 main.go:141] libmachine: Using API Version  1
	I0210 12:06:12.460034  632952 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:06:12.460453  632952 main.go:141] libmachine: Using API Version  1
	I0210 12:06:12.460479  632952 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:06:12.460588  632952 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:06:12.460843  632952 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:06:12.460892  632952 main.go:141] libmachine: (addons-234038) Calling .GetState
	I0210 12:06:12.462055  632952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:06:12.462083  632952 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:06:12.462613  632952 main.go:141] libmachine: (addons-234038) Calling .DriverName
	I0210 12:06:12.463236  632952 main.go:141] libmachine: (addons-234038) Calling .DriverName
	I0210 12:06:12.465031  632952 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.28
	I0210 12:06:12.465092  632952 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 12:06:12.466451  632952 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0210 12:06:12.466472  632952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0210 12:06:12.466496  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHHostname
	I0210 12:06:12.466607  632952 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0210 12:06:12.466615  632952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0210 12:06:12.466627  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHHostname
	I0210 12:06:12.470280  632952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43343
	I0210 12:06:12.470506  632952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39643
	I0210 12:06:12.470798  632952 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:06:12.470991  632952 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:06:12.471085  632952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45281
	I0210 12:06:12.471336  632952 main.go:141] libmachine: Using API Version  1
	I0210 12:06:12.471350  632952 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:06:12.471518  632952 main.go:141] libmachine: Using API Version  1
	I0210 12:06:12.471539  632952 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:06:12.471876  632952 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:06:12.471936  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:06:12.472134  632952 main.go:141] libmachine: (addons-234038) Calling .GetState
	I0210 12:06:12.472197  632952 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:06:12.472374  632952 main.go:141] libmachine: (addons-234038) Calling .GetState
	I0210 12:06:12.472420  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:06:12.473005  632952 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:06:12.473622  632952 main.go:141] libmachine: Using API Version  1
	I0210 12:06:12.473641  632952 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:06:12.474019  632952 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:06:12.474276  632952 host.go:66] Checking if "addons-234038" exists ...
	I0210 12:06:12.474477  632952 main.go:141] libmachine: (addons-234038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:e4:b4", ip: ""} in network mk-addons-234038: {Iface:virbr1 ExpiryTime:2025-02-10 13:05:40 +0000 UTC Type:0 Mac:52:54:00:1f:e4:b4 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-234038 Clientid:01:52:54:00:1f:e4:b4}
	I0210 12:06:12.474505  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined IP address 192.168.39.247 and MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:06:12.474679  632952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:06:12.474724  632952 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:06:12.474736  632952 main.go:141] libmachine: (addons-234038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:e4:b4", ip: ""} in network mk-addons-234038: {Iface:virbr1 ExpiryTime:2025-02-10 13:05:40 +0000 UTC Type:0 Mac:52:54:00:1f:e4:b4 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-234038 Clientid:01:52:54:00:1f:e4:b4}
	I0210 12:06:12.474752  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined IP address 192.168.39.247 and MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:06:12.474974  632952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:06:12.475017  632952 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:06:12.475273  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHPort
	I0210 12:06:12.475341  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHPort
	I0210 12:06:12.475398  632952 main.go:141] libmachine: (addons-234038) Calling .DriverName
	I0210 12:06:12.475442  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHKeyPath
	I0210 12:06:12.475615  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHUsername
	I0210 12:06:12.475663  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHKeyPath
	I0210 12:06:12.476053  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHUsername
	I0210 12:06:12.476122  632952 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/addons-234038/id_rsa Username:docker}
	I0210 12:06:12.476589  632952 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/addons-234038/id_rsa Username:docker}
	I0210 12:06:12.477272  632952 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I0210 12:06:12.478536  632952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41541
	I0210 12:06:12.478874  632952 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0210 12:06:12.478892  632952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0210 12:06:12.478911  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHHostname
	I0210 12:06:12.478998  632952 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:06:12.479510  632952 main.go:141] libmachine: Using API Version  1
	I0210 12:06:12.479531  632952 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:06:12.479957  632952 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:06:12.480242  632952 main.go:141] libmachine: (addons-234038) Calling .GetState
	I0210 12:06:12.482558  632952 main.go:141] libmachine: (addons-234038) Calling .DriverName
	I0210 12:06:12.482634  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:06:12.483114  632952 main.go:141] libmachine: (addons-234038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:e4:b4", ip: ""} in network mk-addons-234038: {Iface:virbr1 ExpiryTime:2025-02-10 13:05:40 +0000 UTC Type:0 Mac:52:54:00:1f:e4:b4 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-234038 Clientid:01:52:54:00:1f:e4:b4}
	I0210 12:06:12.483144  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined IP address 192.168.39.247 and MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:06:12.483338  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHPort
	I0210 12:06:12.483404  632952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46647
	I0210 12:06:12.484041  632952 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:06:12.484149  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHKeyPath
	I0210 12:06:12.484186  632952 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0210 12:06:12.484259  632952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35863
	I0210 12:06:12.484415  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHUsername
	I0210 12:06:12.484732  632952 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/addons-234038/id_rsa Username:docker}
	I0210 12:06:12.485095  632952 main.go:141] libmachine: Using API Version  1
	I0210 12:06:12.485124  632952 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:06:12.485222  632952 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:06:12.485650  632952 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0210 12:06:12.485671  632952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0210 12:06:12.485684  632952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41915
	I0210 12:06:12.485655  632952 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:06:12.485689  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHHostname
	I0210 12:06:12.486958  632952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:06:12.487004  632952 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:06:12.487251  632952 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:06:12.487796  632952 main.go:141] libmachine: Using API Version  1
	I0210 12:06:12.487813  632952 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:06:12.488198  632952 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:06:12.488453  632952 main.go:141] libmachine: (addons-234038) Calling .GetState
	I0210 12:06:12.490489  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:06:12.490605  632952 main.go:141] libmachine: (addons-234038) Calling .DriverName
	I0210 12:06:12.491179  632952 main.go:141] libmachine: (addons-234038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:e4:b4", ip: ""} in network mk-addons-234038: {Iface:virbr1 ExpiryTime:2025-02-10 13:05:40 +0000 UTC Type:0 Mac:52:54:00:1f:e4:b4 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-234038 Clientid:01:52:54:00:1f:e4:b4}
	I0210 12:06:12.491214  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined IP address 192.168.39.247 and MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:06:12.491412  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHPort
	I0210 12:06:12.491593  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHKeyPath
	I0210 12:06:12.491775  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHUsername
	I0210 12:06:12.491952  632952 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/addons-234038/id_rsa Username:docker}
	I0210 12:06:12.492207  632952 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0210 12:06:12.493425  632952 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0210 12:06:12.493445  632952 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0210 12:06:12.493466  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHHostname
	I0210 12:06:12.493588  632952 main.go:141] libmachine: Using API Version  1
	I0210 12:06:12.493609  632952 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:06:12.494117  632952 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:06:12.494325  632952 main.go:141] libmachine: (addons-234038) Calling .GetState
	I0210 12:06:12.496178  632952 main.go:141] libmachine: (addons-234038) Calling .DriverName
	I0210 12:06:12.496860  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:06:12.497321  632952 main.go:141] libmachine: (addons-234038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:e4:b4", ip: ""} in network mk-addons-234038: {Iface:virbr1 ExpiryTime:2025-02-10 13:05:40 +0000 UTC Type:0 Mac:52:54:00:1f:e4:b4 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-234038 Clientid:01:52:54:00:1f:e4:b4}
	I0210 12:06:12.497343  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined IP address 192.168.39.247 and MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:06:12.497524  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHPort
	I0210 12:06:12.497693  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHKeyPath
	I0210 12:06:12.497842  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHUsername
	I0210 12:06:12.497990  632952 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/addons-234038/id_rsa Username:docker}
	I0210 12:06:12.498118  632952 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0210 12:06:12.499307  632952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38405
	I0210 12:06:12.499793  632952 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:06:12.499916  632952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39579
	I0210 12:06:12.500287  632952 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:06:12.500495  632952 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0210 12:06:12.500624  632952 main.go:141] libmachine: Using API Version  1
	I0210 12:06:12.500653  632952 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:06:12.500791  632952 main.go:141] libmachine: Using API Version  1
	I0210 12:06:12.500813  632952 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:06:12.501149  632952 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:06:12.501333  632952 main.go:141] libmachine: (addons-234038) Calling .GetState
	I0210 12:06:12.502330  632952 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:06:12.502535  632952 main.go:141] libmachine: (addons-234038) Calling .GetState
	I0210 12:06:12.503062  632952 main.go:141] libmachine: (addons-234038) Calling .DriverName
	I0210 12:06:12.503080  632952 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0210 12:06:12.503144  632952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35991
	I0210 12:06:12.503486  632952 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:06:12.504024  632952 main.go:141] libmachine: Using API Version  1
	I0210 12:06:12.504046  632952 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:06:12.504378  632952 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:06:12.504941  632952 main.go:141] libmachine: (addons-234038) Calling .GetState
	I0210 12:06:12.505157  632952 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0210 12:06:12.505276  632952 main.go:141] libmachine: (addons-234038) Calling .DriverName
	I0210 12:06:12.505780  632952 main.go:141] libmachine: Making call to close driver server
	I0210 12:06:12.505799  632952 main.go:141] libmachine: (addons-234038) Calling .Close
	I0210 12:06:12.505891  632952 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0210 12:06:12.505978  632952 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:06:12.506330  632952 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:06:12.506339  632952 main.go:141] libmachine: Making call to close driver server
	I0210 12:06:12.506351  632952 main.go:141] libmachine: (addons-234038) Calling .Close
	I0210 12:06:12.506009  632952 main.go:141] libmachine: (addons-234038) DBG | Closing plugin on server side
	I0210 12:06:12.507771  632952 main.go:141] libmachine: (addons-234038) DBG | Closing plugin on server side
	I0210 12:06:12.507783  632952 main.go:141] libmachine: (addons-234038) Calling .DriverName
	I0210 12:06:12.507795  632952 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:06:12.507807  632952 main.go:141] libmachine: Making call to close connection to plugin binary
	W0210 12:06:12.507916  632952 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0210 12:06:12.508140  632952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44597
	I0210 12:06:12.508498  632952 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0210 12:06:12.508546  632952 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:06:12.508562  632952 out.go:177]   - Using image docker.io/busybox:stable
	I0210 12:06:12.509470  632952 main.go:141] libmachine: Using API Version  1
	I0210 12:06:12.509494  632952 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:06:12.509556  632952 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0210 12:06:12.509888  632952 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:06:12.510034  632952 main.go:141] libmachine: (addons-234038) Calling .DriverName
	I0210 12:06:12.510888  632952 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0210 12:06:12.510909  632952 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0210 12:06:12.510932  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHHostname
	I0210 12:06:12.510353  632952 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0210 12:06:12.510988  632952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0210 12:06:12.511000  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHHostname
	I0210 12:06:12.511677  632952 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0210 12:06:12.512843  632952 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0210 12:06:12.513692  632952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35891
	I0210 12:06:12.514203  632952 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:06:12.514712  632952 main.go:141] libmachine: Using API Version  1
	I0210 12:06:12.514735  632952 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:06:12.514793  632952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38631
	I0210 12:06:12.514922  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:06:12.515299  632952 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0210 12:06:12.515309  632952 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:06:12.515386  632952 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:06:12.515441  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:06:12.515472  632952 main.go:141] libmachine: (addons-234038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:e4:b4", ip: ""} in network mk-addons-234038: {Iface:virbr1 ExpiryTime:2025-02-10 13:05:40 +0000 UTC Type:0 Mac:52:54:00:1f:e4:b4 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-234038 Clientid:01:52:54:00:1f:e4:b4}
	I0210 12:06:12.515487  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined IP address 192.168.39.247 and MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:06:12.515557  632952 main.go:141] libmachine: (addons-234038) Calling .GetState
	I0210 12:06:12.515689  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHPort
	I0210 12:06:12.515842  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHKeyPath
	I0210 12:06:12.515872  632952 main.go:141] libmachine: (addons-234038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:e4:b4", ip: ""} in network mk-addons-234038: {Iface:virbr1 ExpiryTime:2025-02-10 13:05:40 +0000 UTC Type:0 Mac:52:54:00:1f:e4:b4 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-234038 Clientid:01:52:54:00:1f:e4:b4}
	I0210 12:06:12.515907  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined IP address 192.168.39.247 and MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:06:12.515972  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHUsername
	I0210 12:06:12.516198  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHPort
	I0210 12:06:12.516253  632952 main.go:141] libmachine: Using API Version  1
	I0210 12:06:12.516266  632952 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:06:12.516321  632952 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/addons-234038/id_rsa Username:docker}
	I0210 12:06:12.516370  632952 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0210 12:06:12.516380  632952 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0210 12:06:12.516390  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHKeyPath
	I0210 12:06:12.516396  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHHostname
	I0210 12:06:12.516637  632952 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:06:12.516796  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHUsername
	I0210 12:06:12.516801  632952 main.go:141] libmachine: (addons-234038) Calling .GetState
	I0210 12:06:12.516977  632952 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/addons-234038/id_rsa Username:docker}
	I0210 12:06:12.518071  632952 main.go:141] libmachine: (addons-234038) Calling .DriverName
	I0210 12:06:12.519106  632952 main.go:141] libmachine: (addons-234038) Calling .DriverName
	I0210 12:06:12.519459  632952 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0210 12:06:12.519479  632952 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0210 12:06:12.519496  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHHostname
	I0210 12:06:12.519757  632952 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I0210 12:06:12.520347  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:06:12.520792  632952 main.go:141] libmachine: (addons-234038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:e4:b4", ip: ""} in network mk-addons-234038: {Iface:virbr1 ExpiryTime:2025-02-10 13:05:40 +0000 UTC Type:0 Mac:52:54:00:1f:e4:b4 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-234038 Clientid:01:52:54:00:1f:e4:b4}
	I0210 12:06:12.520824  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined IP address 192.168.39.247 and MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:06:12.520962  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHPort
	I0210 12:06:12.521190  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHKeyPath
	I0210 12:06:12.521322  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHUsername
	I0210 12:06:12.521522  632952 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/addons-234038/id_rsa Username:docker}
	I0210 12:06:12.521839  632952 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0210 12:06:12.523444  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:06:12.523870  632952 main.go:141] libmachine: (addons-234038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:e4:b4", ip: ""} in network mk-addons-234038: {Iface:virbr1 ExpiryTime:2025-02-10 13:05:40 +0000 UTC Type:0 Mac:52:54:00:1f:e4:b4 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-234038 Clientid:01:52:54:00:1f:e4:b4}
	I0210 12:06:12.523883  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined IP address 192.168.39.247 and MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	W0210 12:06:12.523949  632952 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:54822->192.168.39.247:22: read: connection reset by peer
	I0210 12:06:12.523978  632952 retry.go:31] will retry after 202.12976ms: ssh: handshake failed: read tcp 192.168.39.1:54822->192.168.39.247:22: read: connection reset by peer
	I0210 12:06:12.524134  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHPort
	I0210 12:06:12.524281  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHKeyPath
	I0210 12:06:12.524356  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHUsername
	I0210 12:06:12.524438  632952 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/addons-234038/id_rsa Username:docker}
	I0210 12:06:12.525269  632952 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0210 12:06:12.526495  632952 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0210 12:06:12.526508  632952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0210 12:06:12.526521  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHHostname
	I0210 12:06:12.529246  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:06:12.529681  632952 main.go:141] libmachine: (addons-234038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:e4:b4", ip: ""} in network mk-addons-234038: {Iface:virbr1 ExpiryTime:2025-02-10 13:05:40 +0000 UTC Type:0 Mac:52:54:00:1f:e4:b4 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-234038 Clientid:01:52:54:00:1f:e4:b4}
	I0210 12:06:12.529705  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined IP address 192.168.39.247 and MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:06:12.529868  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHPort
	I0210 12:06:12.529911  632952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42977
	I0210 12:06:12.530083  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHKeyPath
	I0210 12:06:12.530195  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHUsername
	I0210 12:06:12.530251  632952 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:06:12.530327  632952 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/addons-234038/id_rsa Username:docker}
	I0210 12:06:12.530833  632952 main.go:141] libmachine: Using API Version  1
	I0210 12:06:12.530849  632952 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:06:12.531263  632952 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:06:12.531429  632952 main.go:141] libmachine: (addons-234038) Calling .GetState
	I0210 12:06:12.533036  632952 main.go:141] libmachine: (addons-234038) Calling .DriverName
	I0210 12:06:12.534812  632952 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.37.0
	I0210 12:06:12.535984  632952 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0210 12:06:12.536009  632952 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0210 12:06:12.536031  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHHostname
	I0210 12:06:12.538824  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:06:12.539267  632952 main.go:141] libmachine: (addons-234038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:e4:b4", ip: ""} in network mk-addons-234038: {Iface:virbr1 ExpiryTime:2025-02-10 13:05:40 +0000 UTC Type:0 Mac:52:54:00:1f:e4:b4 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-234038 Clientid:01:52:54:00:1f:e4:b4}
	I0210 12:06:12.539296  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined IP address 192.168.39.247 and MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:06:12.539448  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHPort
	I0210 12:06:12.539619  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHKeyPath
	I0210 12:06:12.539760  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHUsername
	I0210 12:06:12.539921  632952 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/addons-234038/id_rsa Username:docker}
	W0210 12:06:12.540538  632952 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:54844->192.168.39.247:22: read: connection reset by peer
	I0210 12:06:12.540567  632952 retry.go:31] will retry after 237.166434ms: ssh: handshake failed: read tcp 192.168.39.1:54844->192.168.39.247:22: read: connection reset by peer
	I0210 12:06:12.920198  632952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0210 12:06:12.993776  632952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0210 12:06:13.015442  632952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0210 12:06:13.015580  632952 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0210 12:06:13.015601  632952 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0210 12:06:13.029479  632952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0210 12:06:13.068218  632952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0210 12:06:13.075883  632952 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0210 12:06:13.075901  632952 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0210 12:06:13.094329  632952 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0210 12:06:13.094364  632952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0210 12:06:13.101716  632952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0210 12:06:13.103025  632952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0210 12:06:13.110071  632952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0210 12:06:13.126298  632952 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0210 12:06:13.126319  632952 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0210 12:06:13.195258  632952 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0210 12:06:13.195286  632952 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0210 12:06:13.226039  632952 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0210 12:06:13.226073  632952 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0210 12:06:13.251454  632952 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0210 12:06:13.251482  632952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
	I0210 12:06:13.257899  632952 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0210 12:06:13.257923  632952 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0210 12:06:13.267021  632952 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0210 12:06:13.267054  632952 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0210 12:06:13.271040  632952 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 12:06:13.271243  632952 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0210 12:06:13.314964  632952 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0210 12:06:13.314987  632952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0210 12:06:13.372310  632952 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0210 12:06:13.372350  632952 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0210 12:06:13.402677  632952 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0210 12:06:13.402714  632952 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0210 12:06:13.436257  632952 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0210 12:06:13.436284  632952 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0210 12:06:13.451449  632952 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0210 12:06:13.451488  632952 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0210 12:06:13.456287  632952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0210 12:06:13.475536  632952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0210 12:06:13.683021  632952 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0210 12:06:13.683053  632952 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0210 12:06:13.686026  632952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0210 12:06:13.695254  632952 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0210 12:06:13.695275  632952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0210 12:06:13.705367  632952 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0210 12:06:13.705400  632952 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0210 12:06:13.973859  632952 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0210 12:06:13.973885  632952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0210 12:06:14.011912  632952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0210 12:06:14.048336  632952 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0210 12:06:14.048372  632952 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0210 12:06:14.278340  632952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0210 12:06:14.379658  632952 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0210 12:06:14.379709  632952 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0210 12:06:14.750924  632952 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0210 12:06:14.750957  632952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0210 12:06:15.165125  632952 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0210 12:06:15.165163  632952 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0210 12:06:15.484136  632952 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0210 12:06:15.484174  632952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0210 12:06:15.765610  632952 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0210 12:06:15.765635  632952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0210 12:06:16.078527  632952 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0210 12:06:16.078563  632952 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0210 12:06:16.362438  632952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0210 12:06:18.024629  632952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.030808029s)
	I0210 12:06:18.024657  632952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (5.009185182s)
	I0210 12:06:18.024694  632952 main.go:141] libmachine: Making call to close driver server
	I0210 12:06:18.024710  632952 main.go:141] libmachine: (addons-234038) Calling .Close
	I0210 12:06:18.024740  632952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.995224608s)
	I0210 12:06:18.024774  632952 main.go:141] libmachine: Making call to close driver server
	I0210 12:06:18.024789  632952 main.go:141] libmachine: (addons-234038) Calling .Close
	I0210 12:06:18.024796  632952 main.go:141] libmachine: Making call to close driver server
	I0210 12:06:18.024810  632952 main.go:141] libmachine: (addons-234038) Calling .Close
	I0210 12:06:18.024830  632952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.956580909s)
	I0210 12:06:18.024862  632952 main.go:141] libmachine: Making call to close driver server
	I0210 12:06:18.024873  632952 main.go:141] libmachine: (addons-234038) Calling .Close
	I0210 12:06:18.025008  632952 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:06:18.025028  632952 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:06:18.025114  632952 main.go:141] libmachine: (addons-234038) DBG | Closing plugin on server side
	I0210 12:06:18.025149  632952 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:06:18.025157  632952 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:06:18.025165  632952 main.go:141] libmachine: Making call to close driver server
	I0210 12:06:18.025157  632952 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:06:18.025172  632952 main.go:141] libmachine: (addons-234038) Calling .Close
	I0210 12:06:18.025182  632952 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:06:18.025193  632952 main.go:141] libmachine: Making call to close driver server
	I0210 12:06:18.025202  632952 main.go:141] libmachine: (addons-234038) Calling .Close
	I0210 12:06:18.025317  632952 main.go:141] libmachine: (addons-234038) DBG | Closing plugin on server side
	I0210 12:06:18.025357  632952 main.go:141] libmachine: Making call to close driver server
	I0210 12:06:18.025371  632952 main.go:141] libmachine: (addons-234038) Calling .Close
	I0210 12:06:18.025416  632952 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:06:18.025443  632952 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:06:18.025452  632952 main.go:141] libmachine: Making call to close driver server
	I0210 12:06:18.025461  632952 main.go:141] libmachine: (addons-234038) Calling .Close
	I0210 12:06:18.025516  632952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.105290233s)
	I0210 12:06:18.025544  632952 main.go:141] libmachine: Making call to close driver server
	I0210 12:06:18.025553  632952 main.go:141] libmachine: (addons-234038) Calling .Close
	I0210 12:06:18.025650  632952 main.go:141] libmachine: (addons-234038) DBG | Closing plugin on server side
	I0210 12:06:18.025657  632952 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:06:18.025669  632952 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:06:18.025765  632952 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:06:18.025777  632952 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:06:18.025785  632952 main.go:141] libmachine: Making call to close driver server
	I0210 12:06:18.025791  632952 main.go:141] libmachine: (addons-234038) Calling .Close
	I0210 12:06:18.027240  632952 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:06:18.027258  632952 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:06:18.027271  632952 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:06:18.027286  632952 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:06:18.027246  632952 main.go:141] libmachine: (addons-234038) DBG | Closing plugin on server side
	I0210 12:06:18.027466  632952 main.go:141] libmachine: (addons-234038) DBG | Closing plugin on server side
	I0210 12:06:18.027503  632952 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:06:18.027510  632952 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:06:18.027602  632952 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:06:18.027614  632952 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:06:18.112270  632952 main.go:141] libmachine: Making call to close driver server
	I0210 12:06:18.112303  632952 main.go:141] libmachine: (addons-234038) Calling .Close
	I0210 12:06:18.112604  632952 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:06:18.112626  632952 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:06:19.319053  632952 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0210 12:06:19.319099  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHHostname
	I0210 12:06:19.322784  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:06:19.323297  632952 main.go:141] libmachine: (addons-234038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:e4:b4", ip: ""} in network mk-addons-234038: {Iface:virbr1 ExpiryTime:2025-02-10 13:05:40 +0000 UTC Type:0 Mac:52:54:00:1f:e4:b4 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-234038 Clientid:01:52:54:00:1f:e4:b4}
	I0210 12:06:19.323325  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined IP address 192.168.39.247 and MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:06:19.323510  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHPort
	I0210 12:06:19.323756  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHKeyPath
	I0210 12:06:19.323914  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHUsername
	I0210 12:06:19.324165  632952 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/addons-234038/id_rsa Username:docker}
	I0210 12:06:19.689083  632952 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0210 12:06:19.912479  632952 addons.go:238] Setting addon gcp-auth=true in "addons-234038"
	I0210 12:06:19.912543  632952 host.go:66] Checking if "addons-234038" exists ...
	I0210 12:06:19.912849  632952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:06:19.912905  632952 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:06:19.913625  632952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.810564346s)
	I0210 12:06:19.913678  632952 main.go:141] libmachine: Making call to close driver server
	I0210 12:06:19.913696  632952 main.go:141] libmachine: (addons-234038) Calling .Close
	I0210 12:06:19.913705  632952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.803602965s)
	I0210 12:06:19.913756  632952 main.go:141] libmachine: Making call to close driver server
	I0210 12:06:19.913771  632952 main.go:141] libmachine: (addons-234038) Calling .Close
	I0210 12:06:19.913761  632952 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.642689511s)
	I0210 12:06:19.913810  632952 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (6.642540033s)
	I0210 12:06:19.913829  632952 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0210 12:06:19.913913  632952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (6.45759706s)
	I0210 12:06:19.913946  632952 main.go:141] libmachine: Making call to close driver server
	I0210 12:06:19.913961  632952 main.go:141] libmachine: (addons-234038) Calling .Close
	I0210 12:06:19.914065  632952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.438493117s)
	I0210 12:06:19.914092  632952 main.go:141] libmachine: Making call to close driver server
	I0210 12:06:19.914106  632952 main.go:141] libmachine: (addons-234038) Calling .Close
	I0210 12:06:19.914218  632952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.228163002s)
	I0210 12:06:19.914244  632952 main.go:141] libmachine: Making call to close driver server
	I0210 12:06:19.914254  632952 main.go:141] libmachine: (addons-234038) Calling .Close
	I0210 12:06:19.914339  632952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.902396113s)
	I0210 12:06:19.914372  632952 main.go:141] libmachine: Making call to close driver server
	I0210 12:06:19.914382  632952 main.go:141] libmachine: (addons-234038) Calling .Close
	I0210 12:06:19.914530  632952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.636143527s)
	W0210 12:06:19.914594  632952 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0210 12:06:19.914618  632952 retry.go:31] will retry after 232.772744ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0210 12:06:19.914972  632952 node_ready.go:35] waiting up to 6m0s for node "addons-234038" to be "Ready" ...
	I0210 12:06:19.917480  632952 main.go:141] libmachine: (addons-234038) DBG | Closing plugin on server side
	I0210 12:06:19.917487  632952 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:06:19.917500  632952 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:06:19.917510  632952 main.go:141] libmachine: Making call to close driver server
	I0210 12:06:19.917517  632952 main.go:141] libmachine: (addons-234038) Calling .Close
	I0210 12:06:19.917574  632952 main.go:141] libmachine: (addons-234038) DBG | Closing plugin on server side
	I0210 12:06:19.917578  632952 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:06:19.917594  632952 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:06:19.917596  632952 main.go:141] libmachine: (addons-234038) DBG | Closing plugin on server side
	I0210 12:06:19.917600  632952 main.go:141] libmachine: (addons-234038) DBG | Closing plugin on server side
	I0210 12:06:19.917573  632952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.815831277s)
	I0210 12:06:19.917622  632952 main.go:141] libmachine: Making call to close driver server
	I0210 12:06:19.917622  632952 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:06:19.917630  632952 main.go:141] libmachine: (addons-234038) Calling .Close
	I0210 12:06:19.917632  632952 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:06:19.917641  632952 main.go:141] libmachine: Making call to close driver server
	I0210 12:06:19.917647  632952 main.go:141] libmachine: (addons-234038) DBG | Closing plugin on server side
	I0210 12:06:19.917648  632952 main.go:141] libmachine: (addons-234038) Calling .Close
	I0210 12:06:19.917670  632952 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:06:19.917676  632952 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:06:19.917683  632952 main.go:141] libmachine: Making call to close driver server
	I0210 12:06:19.917689  632952 main.go:141] libmachine: (addons-234038) Calling .Close
	I0210 12:06:19.917799  632952 main.go:141] libmachine: (addons-234038) DBG | Closing plugin on server side
	I0210 12:06:19.917836  632952 main.go:141] libmachine: (addons-234038) DBG | Closing plugin on server side
	I0210 12:06:19.917869  632952 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:06:19.917880  632952 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:06:19.917886  632952 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:06:19.917898  632952 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:06:19.917907  632952 main.go:141] libmachine: Making call to close driver server
	I0210 12:06:19.917913  632952 main.go:141] libmachine: (addons-234038) Calling .Close
	I0210 12:06:19.917603  632952 main.go:141] libmachine: Making call to close driver server
	I0210 12:06:19.918206  632952 main.go:141] libmachine: (addons-234038) Calling .Close
	I0210 12:06:19.918751  632952 main.go:141] libmachine: (addons-234038) DBG | Closing plugin on server side
	I0210 12:06:19.918768  632952 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:06:19.918781  632952 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:06:19.918792  632952 addons.go:479] Verifying addon ingress=true in "addons-234038"
	I0210 12:06:19.918802  632952 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:06:19.918810  632952 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:06:19.919928  632952 main.go:141] libmachine: (addons-234038) DBG | Closing plugin on server side
	I0210 12:06:19.920000  632952 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:06:19.920025  632952 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:06:19.918773  632952 main.go:141] libmachine: (addons-234038) DBG | Closing plugin on server side
	I0210 12:06:19.917632  632952 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:06:19.920076  632952 main.go:141] libmachine: (addons-234038) DBG | Closing plugin on server side
	I0210 12:06:19.920099  632952 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:06:19.920118  632952 main.go:141] libmachine: Making call to close driver server
	I0210 12:06:19.920132  632952 main.go:141] libmachine: (addons-234038) Calling .Close
	I0210 12:06:19.920005  632952 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:06:19.920150  632952 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:06:19.920162  632952 addons.go:479] Verifying addon registry=true in "addons-234038"
	I0210 12:06:19.920025  632952 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:06:19.920304  632952 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:06:19.920492  632952 main.go:141] libmachine: (addons-234038) DBG | Closing plugin on server side
	I0210 12:06:19.920532  632952 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:06:19.920541  632952 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:06:19.920549  632952 addons.go:479] Verifying addon metrics-server=true in "addons-234038"
	I0210 12:06:19.921246  632952 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-234038 service yakd-dashboard -n yakd-dashboard
	
	I0210 12:06:19.921983  632952 out.go:177] * Verifying registry addon...
	I0210 12:06:19.920045  632952 main.go:141] libmachine: Making call to close driver server
	I0210 12:06:19.922052  632952 main.go:141] libmachine: (addons-234038) Calling .Close
	I0210 12:06:19.922746  632952 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:06:19.922767  632952 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:06:19.922756  632952 main.go:141] libmachine: (addons-234038) DBG | Closing plugin on server side
	I0210 12:06:19.922788  632952 out.go:177] * Verifying ingress addon...
	I0210 12:06:19.924159  632952 node_ready.go:49] node "addons-234038" has status "Ready":"True"
	I0210 12:06:19.924189  632952 node_ready.go:38] duration metric: took 9.189037ms for node "addons-234038" to be "Ready" ...
	I0210 12:06:19.924200  632952 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0210 12:06:19.924511  632952 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0210 12:06:19.925191  632952 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0210 12:06:19.931433  632952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43233
	I0210 12:06:19.931923  632952 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:06:19.932530  632952 main.go:141] libmachine: Using API Version  1
	I0210 12:06:19.932554  632952 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:06:19.932910  632952 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:06:19.933443  632952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:06:19.933488  632952 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:06:19.948667  632952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35829
	I0210 12:06:19.949048  632952 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:06:19.949562  632952 main.go:141] libmachine: Using API Version  1
	I0210 12:06:19.949583  632952 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:06:19.949949  632952 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:06:19.950166  632952 main.go:141] libmachine: (addons-234038) Calling .GetState
	I0210 12:06:19.951947  632952 main.go:141] libmachine: (addons-234038) Calling .DriverName
	I0210 12:06:19.952150  632952 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0210 12:06:19.952172  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHHostname
	I0210 12:06:19.955007  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:06:19.955457  632952 main.go:141] libmachine: (addons-234038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:e4:b4", ip: ""} in network mk-addons-234038: {Iface:virbr1 ExpiryTime:2025-02-10 13:05:40 +0000 UTC Type:0 Mac:52:54:00:1f:e4:b4 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:addons-234038 Clientid:01:52:54:00:1f:e4:b4}
	I0210 12:06:19.955487  632952 main.go:141] libmachine: (addons-234038) DBG | domain addons-234038 has defined IP address 192.168.39.247 and MAC address 52:54:00:1f:e4:b4 in network mk-addons-234038
	I0210 12:06:19.955666  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHPort
	I0210 12:06:19.955850  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHKeyPath
	I0210 12:06:19.956034  632952 main.go:141] libmachine: (addons-234038) Calling .GetSSHUsername
	I0210 12:06:19.956142  632952 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/addons-234038/id_rsa Username:docker}
	I0210 12:06:19.965378  632952 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0210 12:06:19.965406  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:19.971053  632952 main.go:141] libmachine: Making call to close driver server
	I0210 12:06:19.971071  632952 main.go:141] libmachine: (addons-234038) Calling .Close
	I0210 12:06:19.971303  632952 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0210 12:06:19.971330  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:19.971405  632952 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:06:19.971428  632952 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:06:19.971749  632952 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-lngvz" in "kube-system" namespace to be "Ready" ...
	I0210 12:06:20.147948  632952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0210 12:06:20.420552  632952 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-234038" context rescaled to 1 replicas
	I0210 12:06:20.427463  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:20.429059  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:20.930324  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:20.930366  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:21.429581  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:21.429779  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:21.955693  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:21.955746  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:22.019527  632952 pod_ready.go:103] pod "amd-gpu-device-plugin-lngvz" in "kube-system" namespace has status "Ready":"False"
	I0210 12:06:22.144998  632952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.782507262s)
	I0210 12:06:22.145050  632952 main.go:141] libmachine: Making call to close driver server
	I0210 12:06:22.145063  632952 main.go:141] libmachine: (addons-234038) Calling .Close
	I0210 12:06:22.145199  632952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.997199249s)
	I0210 12:06:22.145260  632952 main.go:141] libmachine: Making call to close driver server
	I0210 12:06:22.145265  632952 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.193091473s)
	I0210 12:06:22.145402  632952 main.go:141] libmachine: (addons-234038) DBG | Closing plugin on server side
	I0210 12:06:22.145278  632952 main.go:141] libmachine: (addons-234038) Calling .Close
	I0210 12:06:22.145367  632952 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:06:22.145493  632952 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:06:22.145513  632952 main.go:141] libmachine: Making call to close driver server
	I0210 12:06:22.145525  632952 main.go:141] libmachine: (addons-234038) Calling .Close
	I0210 12:06:22.146940  632952 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0210 12:06:22.147473  632952 main.go:141] libmachine: (addons-234038) DBG | Closing plugin on server side
	I0210 12:06:22.147488  632952 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:06:22.147502  632952 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:06:22.147518  632952 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:06:22.147530  632952 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-234038"
	I0210 12:06:22.147528  632952 main.go:141] libmachine: (addons-234038) DBG | Closing plugin on server side
	I0210 12:06:22.147533  632952 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:06:22.147826  632952 main.go:141] libmachine: Making call to close driver server
	I0210 12:06:22.147840  632952 main.go:141] libmachine: (addons-234038) Calling .Close
	I0210 12:06:22.148089  632952 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:06:22.148104  632952 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:06:22.149078  632952 out.go:177] * Verifying csi-hostpath-driver addon...
	I0210 12:06:22.149872  632952 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0210 12:06:22.151356  632952 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0210 12:06:22.151568  632952 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0210 12:06:22.151591  632952 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0210 12:06:22.177005  632952 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0210 12:06:22.177039  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:22.208585  632952 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0210 12:06:22.208622  632952 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0210 12:06:22.347343  632952 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0210 12:06:22.347386  632952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0210 12:06:22.412399  632952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0210 12:06:22.428793  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:22.429229  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:22.655439  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:22.929620  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:22.929687  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:23.154655  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:23.431309  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:23.431444  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:23.658829  632952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.246379003s)
	I0210 12:06:23.658901  632952 main.go:141] libmachine: Making call to close driver server
	I0210 12:06:23.658919  632952 main.go:141] libmachine: (addons-234038) Calling .Close
	I0210 12:06:23.659370  632952 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:06:23.659393  632952 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:06:23.659405  632952 main.go:141] libmachine: Making call to close driver server
	I0210 12:06:23.659414  632952 main.go:141] libmachine: (addons-234038) Calling .Close
	I0210 12:06:23.659634  632952 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:06:23.659659  632952 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:06:23.660669  632952 addons.go:479] Verifying addon gcp-auth=true in "addons-234038"
	I0210 12:06:23.661968  632952 out.go:177] * Verifying gcp-auth addon...
	I0210 12:06:23.663763  632952 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0210 12:06:23.684670  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:23.701080  632952 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0210 12:06:23.701099  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:23.930311  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:23.930422  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:24.154591  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:24.166977  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:24.428930  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:24.428986  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:24.476466  632952 pod_ready.go:103] pod "amd-gpu-device-plugin-lngvz" in "kube-system" namespace has status "Ready":"False"
	I0210 12:06:24.654666  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:24.666622  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:24.927562  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:24.928758  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:25.154822  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:25.166828  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:25.428947  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:25.429016  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:25.654824  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:25.667450  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:25.929278  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:25.929462  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:26.154593  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:26.167005  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:26.429089  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:26.429242  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:26.477006  632952 pod_ready.go:103] pod "amd-gpu-device-plugin-lngvz" in "kube-system" namespace has status "Ready":"False"
	I0210 12:06:26.654934  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:26.666877  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:26.927821  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:26.928658  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:27.154320  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:27.166510  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:27.532763  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:27.532864  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:27.655696  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:27.666741  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:27.928437  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:27.928453  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:28.155856  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:28.167163  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:28.429808  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:28.429854  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:28.655275  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:28.666084  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:28.927882  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:28.927993  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:28.976384  632952 pod_ready.go:103] pod "amd-gpu-device-plugin-lngvz" in "kube-system" namespace has status "Ready":"False"
	I0210 12:06:29.154676  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:29.166853  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:29.429434  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:29.429619  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:29.654793  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:29.667019  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:29.928942  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:29.929114  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:30.154893  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:30.167311  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:30.429293  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:30.429369  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:30.655472  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:30.666881  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:30.928810  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:30.928997  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:30.976460  632952 pod_ready.go:103] pod "amd-gpu-device-plugin-lngvz" in "kube-system" namespace has status "Ready":"False"
	I0210 12:06:31.154491  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:31.166843  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:31.428732  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:31.431252  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:31.477273  632952 pod_ready.go:93] pod "amd-gpu-device-plugin-lngvz" in "kube-system" namespace has status "Ready":"True"
	I0210 12:06:31.477301  632952 pod_ready.go:82] duration metric: took 11.505527053s for pod "amd-gpu-device-plugin-lngvz" in "kube-system" namespace to be "Ready" ...
	I0210 12:06:31.477316  632952 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-kmd2z" in "kube-system" namespace to be "Ready" ...
	I0210 12:06:31.479518  632952 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-kmd2z" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-kmd2z" not found
	I0210 12:06:31.479543  632952 pod_ready.go:82] duration metric: took 2.218221ms for pod "coredns-668d6bf9bc-kmd2z" in "kube-system" namespace to be "Ready" ...
	E0210 12:06:31.479556  632952 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-kmd2z" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-kmd2z" not found
	I0210 12:06:31.479564  632952 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-zlwf7" in "kube-system" namespace to be "Ready" ...
	I0210 12:06:31.484700  632952 pod_ready.go:93] pod "coredns-668d6bf9bc-zlwf7" in "kube-system" namespace has status "Ready":"True"
	I0210 12:06:31.484721  632952 pod_ready.go:82] duration metric: took 5.150376ms for pod "coredns-668d6bf9bc-zlwf7" in "kube-system" namespace to be "Ready" ...
	I0210 12:06:31.484733  632952 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-234038" in "kube-system" namespace to be "Ready" ...
	I0210 12:06:31.489968  632952 pod_ready.go:93] pod "etcd-addons-234038" in "kube-system" namespace has status "Ready":"True"
	I0210 12:06:31.489992  632952 pod_ready.go:82] duration metric: took 5.252698ms for pod "etcd-addons-234038" in "kube-system" namespace to be "Ready" ...
	I0210 12:06:31.490002  632952 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-234038" in "kube-system" namespace to be "Ready" ...
	I0210 12:06:31.494503  632952 pod_ready.go:93] pod "kube-apiserver-addons-234038" in "kube-system" namespace has status "Ready":"True"
	I0210 12:06:31.494525  632952 pod_ready.go:82] duration metric: took 4.517317ms for pod "kube-apiserver-addons-234038" in "kube-system" namespace to be "Ready" ...
	I0210 12:06:31.494536  632952 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-234038" in "kube-system" namespace to be "Ready" ...
	I0210 12:06:31.654903  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:31.667102  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:31.674844  632952 pod_ready.go:93] pod "kube-controller-manager-addons-234038" in "kube-system" namespace has status "Ready":"True"
	I0210 12:06:31.674867  632952 pod_ready.go:82] duration metric: took 180.324425ms for pod "kube-controller-manager-addons-234038" in "kube-system" namespace to be "Ready" ...
	I0210 12:06:31.674881  632952 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-whfw2" in "kube-system" namespace to be "Ready" ...
	I0210 12:06:31.929127  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:31.929282  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:32.075789  632952 pod_ready.go:93] pod "kube-proxy-whfw2" in "kube-system" namespace has status "Ready":"True"
	I0210 12:06:32.075815  632952 pod_ready.go:82] duration metric: took 400.927745ms for pod "kube-proxy-whfw2" in "kube-system" namespace to be "Ready" ...
	I0210 12:06:32.075831  632952 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-234038" in "kube-system" namespace to be "Ready" ...
	I0210 12:06:32.155019  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:32.167887  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:32.429026  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:32.429055  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:32.475830  632952 pod_ready.go:93] pod "kube-scheduler-addons-234038" in "kube-system" namespace has status "Ready":"True"
	I0210 12:06:32.475862  632952 pod_ready.go:82] duration metric: took 400.021409ms for pod "kube-scheduler-addons-234038" in "kube-system" namespace to be "Ready" ...
	I0210 12:06:32.475877  632952 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-7fbb699795-flrqb" in "kube-system" namespace to be "Ready" ...
	I0210 12:06:32.655425  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:32.666813  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:32.928049  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:32.928545  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:33.154593  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:33.167156  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:33.429981  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:33.430244  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:33.654783  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:33.667310  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:33.932381  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:33.932424  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:34.155280  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:34.167712  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:34.428213  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:34.428208  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:34.482092  632952 pod_ready.go:103] pod "metrics-server-7fbb699795-flrqb" in "kube-system" namespace has status "Ready":"False"
	I0210 12:06:34.655210  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:34.667070  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:34.929312  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:34.929457  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:35.155589  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:35.166910  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:35.427842  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:35.428822  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:35.655478  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:35.666377  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:35.929092  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:35.929238  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:36.155188  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:36.166894  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:36.429129  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:36.429303  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:36.655110  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:36.666049  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:36.928827  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:36.928828  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:36.981708  632952 pod_ready.go:103] pod "metrics-server-7fbb699795-flrqb" in "kube-system" namespace has status "Ready":"False"
	I0210 12:06:37.154648  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:37.167146  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:37.437244  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:37.437303  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:37.654803  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:37.666826  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:38.242426  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:38.242518  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:38.242561  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:38.242835  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:38.430060  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:38.430504  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:38.655007  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:38.666051  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:38.928301  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:38.929155  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:39.155215  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:39.166598  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:39.656623  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:39.656828  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:39.657308  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:39.662146  632952 pod_ready.go:103] pod "metrics-server-7fbb699795-flrqb" in "kube-system" namespace has status "Ready":"False"
	I0210 12:06:39.667844  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:39.928572  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:39.928742  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:40.155369  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:40.166472  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:40.430989  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:40.431447  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:40.655693  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:40.667007  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:40.928678  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:40.928920  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:41.155332  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:41.166621  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:41.430378  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:41.433341  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:41.654601  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:41.666937  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:41.928652  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:41.928757  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:41.982220  632952 pod_ready.go:103] pod "metrics-server-7fbb699795-flrqb" in "kube-system" namespace has status "Ready":"False"
	I0210 12:06:42.155315  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:42.168270  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:42.429142  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:42.429342  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:42.654901  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:42.666693  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:42.928622  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:42.928735  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:43.155082  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:43.166641  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:43.428410  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:43.428456  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:43.654992  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:43.666652  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:43.929358  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:43.929552  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:44.155373  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:44.167662  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:44.428596  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:44.429403  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:44.482759  632952 pod_ready.go:103] pod "metrics-server-7fbb699795-flrqb" in "kube-system" namespace has status "Ready":"False"
	I0210 12:06:44.654768  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:44.666919  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:44.928415  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:44.928432  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:45.154320  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:45.166604  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:45.428167  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:45.430531  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:45.655214  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:45.666306  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:45.928943  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:45.929193  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:46.155421  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:46.166782  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:46.534976  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:46.535965  632952 pod_ready.go:103] pod "metrics-server-7fbb699795-flrqb" in "kube-system" namespace has status "Ready":"False"
	I0210 12:06:46.536040  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:46.654885  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:46.667390  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:46.928833  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:46.929161  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:47.155622  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:47.167396  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:47.427425  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:47.430959  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:47.654831  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:47.667196  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:47.928547  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:47.928669  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:48.155947  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:48.165974  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:48.429581  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:48.429903  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:48.656446  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:48.666747  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:48.928690  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:48.928779  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:48.981867  632952 pod_ready.go:103] pod "metrics-server-7fbb699795-flrqb" in "kube-system" namespace has status "Ready":"False"
	I0210 12:06:49.155061  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:49.166503  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:49.427553  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:49.428028  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:49.655950  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:49.667747  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:49.928033  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:49.928492  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:50.154878  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:50.167031  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:50.428267  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:50.428467  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:50.655072  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:50.666719  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:50.929205  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:50.929249  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:50.982367  632952 pod_ready.go:103] pod "metrics-server-7fbb699795-flrqb" in "kube-system" namespace has status "Ready":"False"
	I0210 12:06:51.154901  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:51.167187  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:51.429023  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:51.429192  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:51.655626  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:51.667423  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:51.927782  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:51.928334  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:52.155657  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:52.167398  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:52.429144  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:52.429245  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:52.654835  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:52.666990  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:52.928784  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:52.928861  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:53.155864  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:53.167348  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:53.427995  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:53.429641  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:53.483915  632952 pod_ready.go:103] pod "metrics-server-7fbb699795-flrqb" in "kube-system" namespace has status "Ready":"False"
	I0210 12:06:53.655298  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:53.666856  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:53.927972  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:53.928178  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:54.155675  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:54.167086  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:54.429099  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:54.429276  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:54.657719  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:54.668099  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:54.930056  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:54.930060  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:55.155895  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:55.166549  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:55.432006  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:55.432285  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:55.654187  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:55.686642  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:55.930831  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:55.930998  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:55.981746  632952 pod_ready.go:103] pod "metrics-server-7fbb699795-flrqb" in "kube-system" namespace has status "Ready":"False"
	I0210 12:06:56.155067  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:56.166012  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:56.701955  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:56.804481  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:56.804593  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:56.804641  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:56.928666  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:56.928711  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:57.156384  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:57.166392  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:57.428141  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:57.429088  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:57.654300  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:57.666350  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:57.927440  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:57.928003  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:57.992383  632952 pod_ready.go:93] pod "metrics-server-7fbb699795-flrqb" in "kube-system" namespace has status "Ready":"True"
	I0210 12:06:57.992410  632952 pod_ready.go:82] duration metric: took 25.516525606s for pod "metrics-server-7fbb699795-flrqb" in "kube-system" namespace to be "Ready" ...
	I0210 12:06:57.992423  632952 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-g7hmw" in "kube-system" namespace to be "Ready" ...
	I0210 12:06:58.010647  632952 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-g7hmw" in "kube-system" namespace has status "Ready":"True"
	I0210 12:06:58.010677  632952 pod_ready.go:82] duration metric: took 18.245717ms for pod "nvidia-device-plugin-daemonset-g7hmw" in "kube-system" namespace to be "Ready" ...
	I0210 12:06:58.010699  632952 pod_ready.go:39] duration metric: took 38.086484869s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0210 12:06:58.010724  632952 api_server.go:52] waiting for apiserver process to appear ...
	I0210 12:06:58.010811  632952 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 12:06:58.030395  632952 api_server.go:72] duration metric: took 45.674036994s to wait for apiserver process to appear ...
	I0210 12:06:58.030425  632952 api_server.go:88] waiting for apiserver healthz status ...
	I0210 12:06:58.030451  632952 api_server.go:253] Checking apiserver healthz at https://192.168.39.247:8443/healthz ...
	I0210 12:06:58.034881  632952 api_server.go:279] https://192.168.39.247:8443/healthz returned 200:
	ok
	I0210 12:06:58.035903  632952 api_server.go:141] control plane version: v1.32.1
	I0210 12:06:58.035932  632952 api_server.go:131] duration metric: took 5.498927ms to wait for apiserver health ...
	I0210 12:06:58.035942  632952 system_pods.go:43] waiting for kube-system pods to appear ...
	I0210 12:06:58.041082  632952 system_pods.go:59] 18 kube-system pods found
	I0210 12:06:58.041143  632952 system_pods.go:61] "amd-gpu-device-plugin-lngvz" [1849adde-c34e-49e5-9e0d-a90bd8296074] Running
	I0210 12:06:58.041153  632952 system_pods.go:61] "coredns-668d6bf9bc-zlwf7" [237090b7-3325-44dd-ba56-960c9f0e5498] Running
	I0210 12:06:58.041164  632952 system_pods.go:61] "csi-hostpath-attacher-0" [52526ae9-6955-4d29-893a-11c01a9ac90d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0210 12:06:58.041177  632952 system_pods.go:61] "csi-hostpath-resizer-0" [0b3851a5-203e-49c0-b159-5e1cc014c915] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0210 12:06:58.041188  632952 system_pods.go:61] "csi-hostpathplugin-hxb6s" [f4827625-8687-4ef1-ab55-010498bb569e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0210 12:06:58.041195  632952 system_pods.go:61] "etcd-addons-234038" [49180f11-b5c9-47c3-8dcf-550f5110b641] Running
	I0210 12:06:58.041205  632952 system_pods.go:61] "kube-apiserver-addons-234038" [cc9cb528-794f-4daf-8af6-f54850765c99] Running
	I0210 12:06:58.041214  632952 system_pods.go:61] "kube-controller-manager-addons-234038" [f2112d81-87d5-4759-9acf-b251029c2023] Running
	I0210 12:06:58.041223  632952 system_pods.go:61] "kube-ingress-dns-minikube" [e791c447-a669-44cd-aa10-6170ba473776] Running
	I0210 12:06:58.041229  632952 system_pods.go:61] "kube-proxy-whfw2" [8ceacc37-e5ce-4d69-868d-29d527e98de7] Running
	I0210 12:06:58.041237  632952 system_pods.go:61] "kube-scheduler-addons-234038" [eed4bf92-6637-4596-a147-674c056eaa2f] Running
	I0210 12:06:58.041245  632952 system_pods.go:61] "metrics-server-7fbb699795-flrqb" [de55d6ce-d3c9-49b5-8f24-e8d71b30fbf5] Running
	I0210 12:06:58.041250  632952 system_pods.go:61] "nvidia-device-plugin-daemonset-g7hmw" [74339db8-20ac-4fe3-b340-5d62da5d4a05] Running
	I0210 12:06:58.041258  632952 system_pods.go:61] "registry-6c88467877-8ks2s" [0ef45a11-4943-40d8-afeb-bfaa998618ef] Running
	I0210 12:06:58.041265  632952 system_pods.go:61] "registry-proxy-wj8c2" [ebbd9a4b-1ff2-4667-840f-d09153bb86fb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0210 12:06:58.041275  632952 system_pods.go:61] "snapshot-controller-68b874b76f-46j84" [6bb4c9ba-39b4-4471-bda5-8802ce0be34d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0210 12:06:58.041287  632952 system_pods.go:61] "snapshot-controller-68b874b76f-jvqrg" [8bc3b8c0-05f1-4a73-a4d1-28e8aa056257] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0210 12:06:58.041297  632952 system_pods.go:61] "storage-provisioner" [26e4fe0a-2296-4f6f-b153-5a4241ddcc5a] Running
	I0210 12:06:58.041304  632952 system_pods.go:74] duration metric: took 5.357363ms to wait for pod list to return data ...
	I0210 12:06:58.041318  632952 default_sa.go:34] waiting for default service account to be created ...
	I0210 12:06:58.043536  632952 default_sa.go:45] found service account: "default"
	I0210 12:06:58.043556  632952 default_sa.go:55] duration metric: took 2.229431ms for default service account to be created ...
	I0210 12:06:58.043566  632952 system_pods.go:116] waiting for k8s-apps to be running ...
	I0210 12:06:58.047500  632952 system_pods.go:86] 18 kube-system pods found
	I0210 12:06:58.047567  632952 system_pods.go:89] "amd-gpu-device-plugin-lngvz" [1849adde-c34e-49e5-9e0d-a90bd8296074] Running
	I0210 12:06:58.047592  632952 system_pods.go:89] "coredns-668d6bf9bc-zlwf7" [237090b7-3325-44dd-ba56-960c9f0e5498] Running
	I0210 12:06:58.047614  632952 system_pods.go:89] "csi-hostpath-attacher-0" [52526ae9-6955-4d29-893a-11c01a9ac90d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0210 12:06:58.047633  632952 system_pods.go:89] "csi-hostpath-resizer-0" [0b3851a5-203e-49c0-b159-5e1cc014c915] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0210 12:06:58.047658  632952 system_pods.go:89] "csi-hostpathplugin-hxb6s" [f4827625-8687-4ef1-ab55-010498bb569e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0210 12:06:58.047673  632952 system_pods.go:89] "etcd-addons-234038" [49180f11-b5c9-47c3-8dcf-550f5110b641] Running
	I0210 12:06:58.047689  632952 system_pods.go:89] "kube-apiserver-addons-234038" [cc9cb528-794f-4daf-8af6-f54850765c99] Running
	I0210 12:06:58.047704  632952 system_pods.go:89] "kube-controller-manager-addons-234038" [f2112d81-87d5-4759-9acf-b251029c2023] Running
	I0210 12:06:58.047724  632952 system_pods.go:89] "kube-ingress-dns-minikube" [e791c447-a669-44cd-aa10-6170ba473776] Running
	I0210 12:06:58.047738  632952 system_pods.go:89] "kube-proxy-whfw2" [8ceacc37-e5ce-4d69-868d-29d527e98de7] Running
	I0210 12:06:58.047752  632952 system_pods.go:89] "kube-scheduler-addons-234038" [eed4bf92-6637-4596-a147-674c056eaa2f] Running
	I0210 12:06:58.047767  632952 system_pods.go:89] "metrics-server-7fbb699795-flrqb" [de55d6ce-d3c9-49b5-8f24-e8d71b30fbf5] Running
	I0210 12:06:58.047787  632952 system_pods.go:89] "nvidia-device-plugin-daemonset-g7hmw" [74339db8-20ac-4fe3-b340-5d62da5d4a05] Running
	I0210 12:06:58.047801  632952 system_pods.go:89] "registry-6c88467877-8ks2s" [0ef45a11-4943-40d8-afeb-bfaa998618ef] Running
	I0210 12:06:58.047818  632952 system_pods.go:89] "registry-proxy-wj8c2" [ebbd9a4b-1ff2-4667-840f-d09153bb86fb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0210 12:06:58.047842  632952 system_pods.go:89] "snapshot-controller-68b874b76f-46j84" [6bb4c9ba-39b4-4471-bda5-8802ce0be34d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0210 12:06:58.047861  632952 system_pods.go:89] "snapshot-controller-68b874b76f-jvqrg" [8bc3b8c0-05f1-4a73-a4d1-28e8aa056257] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0210 12:06:58.047875  632952 system_pods.go:89] "storage-provisioner" [26e4fe0a-2296-4f6f-b153-5a4241ddcc5a] Running
	I0210 12:06:58.047892  632952 system_pods.go:126] duration metric: took 4.318677ms to wait for k8s-apps to be running ...
	I0210 12:06:58.047911  632952 system_svc.go:44] waiting for kubelet service to be running ....
	I0210 12:06:58.047983  632952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 12:06:58.062210  632952 system_svc.go:56] duration metric: took 14.29371ms WaitForService to wait for kubelet
	I0210 12:06:58.062239  632952 kubeadm.go:582] duration metric: took 45.705887664s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0210 12:06:58.062271  632952 node_conditions.go:102] verifying NodePressure condition ...
	I0210 12:06:58.064672  632952 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 12:06:58.064694  632952 node_conditions.go:123] node cpu capacity is 2
	I0210 12:06:58.064707  632952 node_conditions.go:105] duration metric: took 2.428265ms to run NodePressure ...
	I0210 12:06:58.064719  632952 start.go:241] waiting for startup goroutines ...
	I0210 12:06:58.156219  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:58.167050  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:58.433507  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:58.433580  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:58.654525  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:58.666704  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:58.929058  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:58.929144  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:59.155195  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:59.166391  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:59.427375  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:59.428500  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:06:59.654400  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:06:59.666412  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:06:59.927507  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:06:59.928211  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:00.155352  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:07:00.166583  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:00.428542  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:07:00.429036  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:00.655271  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:07:00.666156  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:00.927779  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:07:00.928430  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:01.154333  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:07:01.167699  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:01.428233  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:01.428475  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:07:01.654976  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:07:01.666029  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:01.928910  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:01.929010  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:07:02.155275  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:07:02.166406  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:02.760463  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:02.760592  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:02.760641  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:07:02.760730  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:07:02.927532  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:07:02.928791  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:03.155353  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:07:03.166278  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:03.428150  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:07:03.431838  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:03.655054  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:07:03.666294  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:03.929220  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:07:03.929304  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:04.154576  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:07:04.166881  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:04.428301  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:07:04.429229  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:04.655802  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:07:04.666988  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:04.928660  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:04.928685  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:07:05.156178  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:07:05.166373  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:05.427432  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:07:05.428705  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:05.655041  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:07:05.666435  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:05.928935  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:07:05.929227  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:06.155546  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:07:06.167931  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:06.429202  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:07:06.429354  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:06.655806  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:07:06.666710  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:06.928393  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:07:06.928692  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:07.155414  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:07:07.166460  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:07.428550  632952 kapi.go:107] duration metric: took 47.504030482s to wait for kubernetes.io/minikube-addons=registry ...
	I0210 12:07:07.430216  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:07.655540  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:07:07.668129  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:07.928376  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:08.155402  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:07:08.166788  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:08.429566  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:08.654558  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:07:08.666777  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:08.929155  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:09.156279  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:07:09.166982  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:09.433026  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:09.655066  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:07:09.666764  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:09.928623  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:10.155445  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:07:10.166855  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:10.428858  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:10.655038  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:07:10.666924  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:10.928728  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:11.154936  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:07:11.166809  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:11.428709  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:11.655028  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:07:11.665907  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:11.928883  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:12.156983  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:07:12.166627  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:12.429427  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:12.654619  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:07:12.666735  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:13.142723  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:13.154649  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:07:13.182954  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:13.453895  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:13.655278  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:07:13.666145  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:13.928018  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:14.155000  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:07:14.167006  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:14.430707  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:14.654857  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:07:14.667266  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:14.928720  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:15.154588  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:07:15.200666  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:15.429271  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:15.655377  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:07:15.666476  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:15.927989  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:16.155333  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:07:16.167271  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:16.431117  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:16.655494  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:07:16.666588  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:16.928583  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:17.154591  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:07:17.166765  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:17.430797  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:17.655181  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:07:17.756798  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:17.928832  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:18.155493  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:07:18.166486  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:18.429725  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:18.655358  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:07:18.667024  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:18.928424  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:19.161049  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:07:19.171014  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:19.429516  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:19.654681  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:07:19.667381  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:19.928238  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:20.155553  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:07:20.166584  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:20.428555  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:20.653780  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:07:20.667342  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:20.929510  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:21.156143  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:07:21.167078  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:21.430394  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:21.655523  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:07:21.666790  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:21.929757  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:22.155307  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:07:22.166363  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:22.428841  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:22.655454  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:07:22.666671  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:22.929012  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:23.155256  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:07:23.166703  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:23.429359  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:23.654571  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:07:23.669477  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:23.930555  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:24.154708  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:07:24.166960  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:24.429438  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:24.654685  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:07:24.666715  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:24.928763  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:25.155230  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:07:25.166439  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:25.430392  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:25.655241  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:07:25.667517  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:25.928016  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:26.156940  632952 kapi.go:107] duration metric: took 1m4.005574287s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0210 12:07:26.168645  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:26.429865  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:26.668167  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:26.929737  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:27.167696  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:27.862632  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:27.863548  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:27.928291  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:28.167373  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:28.428712  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:28.667988  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:28.929382  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:29.167423  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:29.428593  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:29.667164  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:30.277644  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:30.281451  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:30.428867  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:30.666470  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:30.928757  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:31.166447  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:31.428912  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:31.667653  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:31.928948  632952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:07:32.166819  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:32.430460  632952 kapi.go:107] duration metric: took 1m12.505265184s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0210 12:07:32.667594  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:33.235286  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:33.667651  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:34.166802  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:34.668554  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:35.167531  632952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:07:35.666645  632952 kapi.go:107] duration metric: took 1m12.00287766s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0210 12:07:35.668346  632952 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-234038 cluster.
	I0210 12:07:35.669590  632952 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0210 12:07:35.670777  632952 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0210 12:07:35.672054  632952 out.go:177] * Enabled addons: ingress-dns, amd-gpu-device-plugin, cloud-spanner, nvidia-device-plugin, storage-provisioner-rancher, inspektor-gadget, storage-provisioner, metrics-server, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0210 12:07:35.673143  632952 addons.go:514] duration metric: took 1m23.31672854s for enable addons: enabled=[ingress-dns amd-gpu-device-plugin cloud-spanner nvidia-device-plugin storage-provisioner-rancher inspektor-gadget storage-provisioner metrics-server yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0210 12:07:35.673204  632952 start.go:246] waiting for cluster config update ...
	I0210 12:07:35.673235  632952 start.go:255] writing updated cluster config ...
	I0210 12:07:35.673544  632952 ssh_runner.go:195] Run: rm -f paused
	I0210 12:07:35.725818  632952 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0210 12:07:35.727286  632952 out.go:177] * Done! kubectl is now configured to use "addons-234038" cluster and "default" namespace by default
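As the gcp-auth messages above note, a pod can opt out of having GCP credentials mounted by carrying the `gcp-auth-skip-secret` label. The following is a minimal sketch of such a manifest, not taken from this test run: the label key comes from the minikube message above, while the label value "true", the pod name, and the image are illustrative assumptions.

apiVersion: v1
kind: Pod
metadata:
  name: no-gcp-creds              # hypothetical name, for illustration only
  labels:
    gcp-auth-skip-secret: "true"  # key from the gcp-auth hint above; value assumed
spec:
  containers:
  - name: app
    image: nginx                  # placeholder image

Applying it with `kubectl --context addons-234038 apply -f <file>` (context name as used elsewhere in this report) would create a pod that the gcp-auth webhook leaves untouched, per the message above.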
	
	
	==> CRI-O <==
	Feb 10 12:10:48 addons-234038 crio[666]: time="2025-02-10 12:10:48.104026075Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1479fb81-f147-4bdd-a294-832c6d31afdd name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 12:10:48 addons-234038 crio[666]: time="2025-02-10 12:10:48.104322644Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3baed9a946b1058d6692b7700858f13bdad8bdca271168e76ab1adb50dcfb1b8,PodSandboxId:b61092316de91da690bc63232e37b3a7065e3810efc2969aad3ad2e1d58bcd7b,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:6666d93f054a3f4315894b76f2023f3da2fcb5ceb5f8d91625cca81623edd2da,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d41a14a4ecff96bdae6253ad2f58d8f258786db438307846081e8d835b984111,State:CONTAINER_RUNNING,CreatedAt:1739189307696719334,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b817e0be-815e-46cb-8d43-a875a079b5d0,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fd76bef250f9df52aba5ee261a72e43d9af1605cf8f2ff4038632496885ae83,PodSandboxId:619ce0114532b6ac29c3db4ca360a1f7efbdd4ea996c42c265b5c7c330ebe4f1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1739189258864551163,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 530c1b30-7cd8-4330-8f5a-bc8389728c98,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0c5e0d8fcee975309ea7fd53c5b418508bd8dd20ba5fa57f1e04f15b33c74df,PodSandboxId:c468896c725fbf863a37443f93a10564ac26a0393e301cb6b1c86b9cdcd77191,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1739189251568768732,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-bjzkm,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b82de956-10ad-411c-b93d-602ce7161def,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:5e1c4350cc6808586e3f4aa74208f890dda8d2bc33f5aa03f580c14983b0c9e5,PodSandboxId:14d24b5742c28004b2084f0e2f3fa8312708e1861b6d1bf5569dbe749b384923,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1739189251460384047,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xqjd4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8c65ca85-96ec-4c1b-aeaf-306d9e765948,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b7545854a4a271ce1913024048180b8106b36b2018f11d40e24e8fbf8483ce0,PodSandboxId:a3990c2125278552bdd4a584ad5fbbc6fc145378981c117894ab32177ec465fd,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1739189233434410840,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-rmj79,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 71c1fa45-2d4a-4a62-b9b6-4042fe03346f,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e220645bcc51b6818e3acf57cc5c6761c8de4cea59104429a7f0fa586fdc1d70,PodSandboxId:d9bb0a1439eba9ad316bed3d118e8a3a5ce7ed17e7871b9a22db24a878144054,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1739189190744439600,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-lngvz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1849adde-c34e-49e5-9e0d-a90bd8296074,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d038a23255be07b29a40e1ec5cd6dc6e027493414e95f7241c308acbcd77472,PodSandboxId:86b7d072594edc74b334a597c0855a810e74005d63ed08a7f2b15c833beee94e,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1739189188152060638,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e791c447-a669-44cd-aa10-6170ba473776,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10448f633cbb121721da724f7968c9ec57da4173de760911b865d5c2d1f73b2a,PodSandboxId:c4c2d67b929ee8dc9078be72d7c4190bacfb3653f1fac4365df7541e9917b5ec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f
40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1739189178888394642,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26e4fe0a-2296-4f6f-b153-5a4241ddcc5a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4a71ed677d62285513d92e6781de7fe6d43ac83414862702f7566641512bee0,PodSandboxId:72a3339a8672ee8ccfad04677007c18553e1c0464d30bf98010af29be43c1408,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d
3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1739189175385495332,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-zlwf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 237090b7-3325-44dd-ba56-960c9f0e5498,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85749f84c3fc2973ff009ba09e8511da5134fc3e44a95c421d531
8887834d98b,PodSandboxId:bf002dea9ce4fb95d1a2880ce2777b5e085616e6c65f7c5b9b4d666fdc3e5695,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1739189173165005600,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-whfw2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ceacc37-e5ce-4d69-868d-29d527e98de7,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c92752e59c0d2f0ef1c62a4bebc4f3fd37d46693624376461b65c746f60dce72,PodSandboxId:342f2ebf
37c1c07ed8093bfe22c59ab31559ad0d9ae98ce412c3f02e8029eb17,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1739189162374563614,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-234038,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17187e3b35407d571a7f9875d98ea3ee,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62d9159e89009d81ce220ca1c8a046ff914e8f083ecddbb05457702d7f5aafb1,PodSandboxId:52b4d214b78be461effde06d5
e38c0988dfe060cbcf9101b2df40454eda9ef07,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1739189162378311385,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-234038,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5dbb4cfbbd5d3ad13e63f202949b007,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:368deae37d5f386b063c29e008c3141625bc33adb45c28519583f14bfa7af022,PodSandboxId:7c562b2c159bf7b7e0d973f6a908aa3725b0cb01b2
4b400977ff76891db8c028,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1739189162340055629,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-234038,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c1cd4a90d70d56431ef149d14b86f52,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ba67e090d35175d954f0e9e9aa37da817d73f15946b33668108107a960f16e6,PodSandboxId:ad8cf0f4e2aad339a5b055c1e288c1abf9ad495dc54bf838b810bf9a97ece9cf,Metadata:&ContainerMetad
ata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1739189162273854450,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-234038,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec735cdbe4b863dd4a6a820603186ae5,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1479fb81-f147-4bdd-a294-832c6d31afdd name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 12:10:48 addons-234038 crio[666]: time="2025-02-10 12:10:48.104797700Z" level=debug msg="Content-Type from manifest GET is \"application/vnd.docker.distribution.manifest.list.v2+json\"" file="docker/docker_client.go:964"
	Feb 10 12:10:48 addons-234038 crio[666]: time="2025-02-10 12:10:48.104957797Z" level=debug msg="GET https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" file="docker/docker_client.go:631"
	Feb 10 12:10:48 addons-234038 crio[666]: time="2025-02-10 12:10:48.105568268Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:nil,LabelSelector:map[string]string{io.kubernetes.pod.uid: 45456a45-5138-47aa-a6f9-1fd2658a3c2f,},},}" file="otel-collector/interceptors.go:62" id=eb8b77ef-d649-4e66-95f3-84028356dd4a name=/runtime.v1.RuntimeService/ListPodSandbox
	Feb 10 12:10:48 addons-234038 crio[666]: time="2025-02-10 12:10:48.105760365Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:ab61e602b8ce687febab1e51889632e8ff508b9682a2a3bc19bdb13d0d0a40a6,Metadata:&PodSandboxMetadata{Name:hello-world-app-7d9564db4-2qmkk,Uid:45456a45-5138-47aa-a6f9-1fd2658a3c2f,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1739189447179141169,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-7d9564db4-2qmkk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 45456a45-5138-47aa-a6f9-1fd2658a3c2f,pod-template-hash: 7d9564db4,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-02-10T12:10:46.861270288Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=eb8b77ef-d649-4e66-95f3-84028356dd4a name=/runtime.v1.RuntimeService/ListPodSandbox
	Feb 10 12:10:48 addons-234038 crio[666]: time="2025-02-10 12:10:48.106226892Z" level=debug msg="Request: &PodSandboxStatusRequest{PodSandboxId:ab61e602b8ce687febab1e51889632e8ff508b9682a2a3bc19bdb13d0d0a40a6,Verbose:false,}" file="otel-collector/interceptors.go:62" id=f4e443dc-6cb0-4b61-89ed-9da8d41aabf4 name=/runtime.v1.RuntimeService/PodSandboxStatus
	Feb 10 12:10:48 addons-234038 crio[666]: time="2025-02-10 12:10:48.106318415Z" level=debug msg="Response: &PodSandboxStatusResponse{Status:&PodSandboxStatus{Id:ab61e602b8ce687febab1e51889632e8ff508b9682a2a3bc19bdb13d0d0a40a6,Metadata:&PodSandboxMetadata{Name:hello-world-app-7d9564db4-2qmkk,Uid:45456a45-5138-47aa-a6f9-1fd2658a3c2f,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1739189447179141169,Network:&PodSandboxNetworkStatus{Ip:10.244.0.33,AdditionalIps:[]*PodIP{},},Linux:&LinuxPodSandboxStatus{Namespaces:&Namespace{Options:&NamespaceOption{Network:POD,Pid:CONTAINER,Ipc:POD,TargetId:,UsernsOptions:nil,},},},Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-7d9564db4-2qmkk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 45456a45-5138-47aa-a6f9-1fd2658a3c2f,pod-template-hash: 7d9564db4,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-02-10T12:10:46.861270288Z,kubernetes.io/config.source: api
,},RuntimeHandler:,},Info:map[string]string{},ContainersStatuses:[]*ContainerStatus{},Timestamp:0,}" file="otel-collector/interceptors.go:74" id=f4e443dc-6cb0-4b61-89ed-9da8d41aabf4 name=/runtime.v1.RuntimeService/PodSandboxStatus
	Feb 10 12:10:48 addons-234038 crio[666]: time="2025-02-10 12:10:48.106652096Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{io.kubernetes.pod.uid: 45456a45-5138-47aa-a6f9-1fd2658a3c2f,},},}" file="otel-collector/interceptors.go:62" id=9ade8686-9ce5-46f0-a7ca-50d191c91639 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 12:10:48 addons-234038 crio[666]: time="2025-02-10 12:10:48.106703517Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9ade8686-9ce5-46f0-a7ca-50d191c91639 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 12:10:48 addons-234038 crio[666]: time="2025-02-10 12:10:48.106754108Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=9ade8686-9ce5-46f0-a7ca-50d191c91639 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 12:10:48 addons-234038 crio[666]: time="2025-02-10 12:10:48.119601303Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bc21043e-6aed-4b85-8985-0ef09d772f3a name=/runtime.v1.RuntimeService/Version
	Feb 10 12:10:48 addons-234038 crio[666]: time="2025-02-10 12:10:48.119664082Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bc21043e-6aed-4b85-8985-0ef09d772f3a name=/runtime.v1.RuntimeService/Version
	Feb 10 12:10:48 addons-234038 crio[666]: time="2025-02-10 12:10:48.120821762Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a00c2472-8e9b-4971-9d7f-bba1aeb04b06 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 12:10:48 addons-234038 crio[666]: time="2025-02-10 12:10:48.121948390Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739189448121929938,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595287,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a00c2472-8e9b-4971-9d7f-bba1aeb04b06 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 12:10:48 addons-234038 crio[666]: time="2025-02-10 12:10:48.122375870Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=557f25c9-e982-40ab-8c4f-4ad8ed99c51e name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 12:10:48 addons-234038 crio[666]: time="2025-02-10 12:10:48.122425063Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=557f25c9-e982-40ab-8c4f-4ad8ed99c51e name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 12:10:48 addons-234038 crio[666]: time="2025-02-10 12:10:48.122706218Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3baed9a946b1058d6692b7700858f13bdad8bdca271168e76ab1adb50dcfb1b8,PodSandboxId:b61092316de91da690bc63232e37b3a7065e3810efc2969aad3ad2e1d58bcd7b,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:6666d93f054a3f4315894b76f2023f3da2fcb5ceb5f8d91625cca81623edd2da,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d41a14a4ecff96bdae6253ad2f58d8f258786db438307846081e8d835b984111,State:CONTAINER_RUNNING,CreatedAt:1739189307696719334,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b817e0be-815e-46cb-8d43-a875a079b5d0,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fd76bef250f9df52aba5ee261a72e43d9af1605cf8f2ff4038632496885ae83,PodSandboxId:619ce0114532b6ac29c3db4ca360a1f7efbdd4ea996c42c265b5c7c330ebe4f1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1739189258864551163,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 530c1b30-7cd8-4330-8f5a-bc8389728c98,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0c5e0d8fcee975309ea7fd53c5b418508bd8dd20ba5fa57f1e04f15b33c74df,PodSandboxId:c468896c725fbf863a37443f93a10564ac26a0393e301cb6b1c86b9cdcd77191,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1739189251568768732,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-bjzkm,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b82de956-10ad-411c-b93d-602ce7161def,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:5e1c4350cc6808586e3f4aa74208f890dda8d2bc33f5aa03f580c14983b0c9e5,PodSandboxId:14d24b5742c28004b2084f0e2f3fa8312708e1861b6d1bf5569dbe749b384923,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1739189251460384047,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xqjd4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8c65ca85-96ec-4c1b-aeaf-306d9e765948,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b7545854a4a271ce1913024048180b8106b36b2018f11d40e24e8fbf8483ce0,PodSandboxId:a3990c2125278552bdd4a584ad5fbbc6fc145378981c117894ab32177ec465fd,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1739189233434410840,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-rmj79,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 71c1fa45-2d4a-4a62-b9b6-4042fe03346f,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e220645bcc51b6818e3acf57cc5c6761c8de4cea59104429a7f0fa586fdc1d70,PodSandboxId:d9bb0a1439eba9ad316bed3d118e8a3a5ce7ed17e7871b9a22db24a878144054,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1739189190744439600,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-lngvz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1849adde-c34e-49e5-9e0d-a90bd8296074,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d038a23255be07b29a40e1ec5cd6dc6e027493414e95f7241c308acbcd77472,PodSandboxId:86b7d072594edc74b334a597c0855a810e74005d63ed08a7f2b15c833beee94e,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1739189188152060638,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e791c447-a669-44cd-aa10-6170ba473776,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10448f633cbb121721da724f7968c9ec57da4173de760911b865d5c2d1f73b2a,PodSandboxId:c4c2d67b929ee8dc9078be72d7c4190bacfb3653f1fac4365df7541e9917b5ec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f
40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1739189178888394642,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26e4fe0a-2296-4f6f-b153-5a4241ddcc5a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4a71ed677d62285513d92e6781de7fe6d43ac83414862702f7566641512bee0,PodSandboxId:72a3339a8672ee8ccfad04677007c18553e1c0464d30bf98010af29be43c1408,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d
3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1739189175385495332,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-zlwf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 237090b7-3325-44dd-ba56-960c9f0e5498,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85749f84c3fc2973ff009ba09e8511da5134fc3e44a95c421d531
8887834d98b,PodSandboxId:bf002dea9ce4fb95d1a2880ce2777b5e085616e6c65f7c5b9b4d666fdc3e5695,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1739189173165005600,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-whfw2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ceacc37-e5ce-4d69-868d-29d527e98de7,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c92752e59c0d2f0ef1c62a4bebc4f3fd37d46693624376461b65c746f60dce72,PodSandboxId:342f2ebf
37c1c07ed8093bfe22c59ab31559ad0d9ae98ce412c3f02e8029eb17,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1739189162374563614,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-234038,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17187e3b35407d571a7f9875d98ea3ee,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62d9159e89009d81ce220ca1c8a046ff914e8f083ecddbb05457702d7f5aafb1,PodSandboxId:52b4d214b78be461effde06d5
e38c0988dfe060cbcf9101b2df40454eda9ef07,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1739189162378311385,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-234038,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5dbb4cfbbd5d3ad13e63f202949b007,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:368deae37d5f386b063c29e008c3141625bc33adb45c28519583f14bfa7af022,PodSandboxId:7c562b2c159bf7b7e0d973f6a908aa3725b0cb01b2
4b400977ff76891db8c028,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1739189162340055629,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-234038,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c1cd4a90d70d56431ef149d14b86f52,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ba67e090d35175d954f0e9e9aa37da817d73f15946b33668108107a960f16e6,PodSandboxId:ad8cf0f4e2aad339a5b055c1e288c1abf9ad495dc54bf838b810bf9a97ece9cf,Metadata:&ContainerMetad
ata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1739189162273854450,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-234038,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec735cdbe4b863dd4a6a820603186ae5,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=557f25c9-e982-40ab-8c4f-4ad8ed99c51e name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 12:10:48 addons-234038 crio[666]: time="2025-02-10 12:10:48.160205638Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f1f0ac96-e82e-471d-93c2-a0a8dca47d83 name=/runtime.v1.RuntimeService/Version
	Feb 10 12:10:48 addons-234038 crio[666]: time="2025-02-10 12:10:48.160273341Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f1f0ac96-e82e-471d-93c2-a0a8dca47d83 name=/runtime.v1.RuntimeService/Version
	Feb 10 12:10:48 addons-234038 crio[666]: time="2025-02-10 12:10:48.161269858Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5e6192b5-b0aa-4e3a-afaa-980ff7d47cb7 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 12:10:48 addons-234038 crio[666]: time="2025-02-10 12:10:48.162724382Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739189448162699301,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595287,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5e6192b5-b0aa-4e3a-afaa-980ff7d47cb7 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 12:10:48 addons-234038 crio[666]: time="2025-02-10 12:10:48.164824256Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d98616e7-4059-428d-9e10-3aed34f65863 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 12:10:48 addons-234038 crio[666]: time="2025-02-10 12:10:48.164938557Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d98616e7-4059-428d-9e10-3aed34f65863 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 12:10:48 addons-234038 crio[666]: time="2025-02-10 12:10:48.165480058Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3baed9a946b1058d6692b7700858f13bdad8bdca271168e76ab1adb50dcfb1b8,PodSandboxId:b61092316de91da690bc63232e37b3a7065e3810efc2969aad3ad2e1d58bcd7b,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:6666d93f054a3f4315894b76f2023f3da2fcb5ceb5f8d91625cca81623edd2da,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d41a14a4ecff96bdae6253ad2f58d8f258786db438307846081e8d835b984111,State:CONTAINER_RUNNING,CreatedAt:1739189307696719334,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b817e0be-815e-46cb-8d43-a875a079b5d0,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fd76bef250f9df52aba5ee261a72e43d9af1605cf8f2ff4038632496885ae83,PodSandboxId:619ce0114532b6ac29c3db4ca360a1f7efbdd4ea996c42c265b5c7c330ebe4f1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1739189258864551163,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 530c1b30-7cd8-4330-8f5a-bc8389728c98,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0c5e0d8fcee975309ea7fd53c5b418508bd8dd20ba5fa57f1e04f15b33c74df,PodSandboxId:c468896c725fbf863a37443f93a10564ac26a0393e301cb6b1c86b9cdcd77191,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1739189251568768732,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-bjzkm,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b82de956-10ad-411c-b93d-602ce7161def,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:5e1c4350cc6808586e3f4aa74208f890dda8d2bc33f5aa03f580c14983b0c9e5,PodSandboxId:14d24b5742c28004b2084f0e2f3fa8312708e1861b6d1bf5569dbe749b384923,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1739189251460384047,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xqjd4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8c65ca85-96ec-4c1b-aeaf-306d9e765948,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b7545854a4a271ce1913024048180b8106b36b2018f11d40e24e8fbf8483ce0,PodSandboxId:a3990c2125278552bdd4a584ad5fbbc6fc145378981c117894ab32177ec465fd,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1739189233434410840,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-rmj79,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 71c1fa45-2d4a-4a62-b9b6-4042fe03346f,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e220645bcc51b6818e3acf57cc5c6761c8de4cea59104429a7f0fa586fdc1d70,PodSandboxId:d9bb0a1439eba9ad316bed3d118e8a3a5ce7ed17e7871b9a22db24a878144054,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1739189190744439600,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-lngvz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1849adde-c34e-49e5-9e0d-a90bd8296074,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d038a23255be07b29a40e1ec5cd6dc6e027493414e95f7241c308acbcd77472,PodSandboxId:86b7d072594edc74b334a597c0855a810e74005d63ed08a7f2b15c833beee94e,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1739189188152060638,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e791c447-a669-44cd-aa10-6170ba473776,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10448f633cbb121721da724f7968c9ec57da4173de760911b865d5c2d1f73b2a,PodSandboxId:c4c2d67b929ee8dc9078be72d7c4190bacfb3653f1fac4365df7541e9917b5ec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f
40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1739189178888394642,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26e4fe0a-2296-4f6f-b153-5a4241ddcc5a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4a71ed677d62285513d92e6781de7fe6d43ac83414862702f7566641512bee0,PodSandboxId:72a3339a8672ee8ccfad04677007c18553e1c0464d30bf98010af29be43c1408,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d
3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1739189175385495332,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-zlwf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 237090b7-3325-44dd-ba56-960c9f0e5498,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85749f84c3fc2973ff009ba09e8511da5134fc3e44a95c421d531
8887834d98b,PodSandboxId:bf002dea9ce4fb95d1a2880ce2777b5e085616e6c65f7c5b9b4d666fdc3e5695,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1739189173165005600,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-whfw2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ceacc37-e5ce-4d69-868d-29d527e98de7,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c92752e59c0d2f0ef1c62a4bebc4f3fd37d46693624376461b65c746f60dce72,PodSandboxId:342f2ebf
37c1c07ed8093bfe22c59ab31559ad0d9ae98ce412c3f02e8029eb17,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1739189162374563614,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-234038,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17187e3b35407d571a7f9875d98ea3ee,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62d9159e89009d81ce220ca1c8a046ff914e8f083ecddbb05457702d7f5aafb1,PodSandboxId:52b4d214b78be461effde06d5
e38c0988dfe060cbcf9101b2df40454eda9ef07,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1739189162378311385,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-234038,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5dbb4cfbbd5d3ad13e63f202949b007,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:368deae37d5f386b063c29e008c3141625bc33adb45c28519583f14bfa7af022,PodSandboxId:7c562b2c159bf7b7e0d973f6a908aa3725b0cb01b2
4b400977ff76891db8c028,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1739189162340055629,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-234038,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c1cd4a90d70d56431ef149d14b86f52,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ba67e090d35175d954f0e9e9aa37da817d73f15946b33668108107a960f16e6,PodSandboxId:ad8cf0f4e2aad339a5b055c1e288c1abf9ad495dc54bf838b810bf9a97ece9cf,Metadata:&ContainerMetad
ata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1739189162273854450,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-234038,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec735cdbe4b863dd4a6a820603186ae5,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d98616e7-4059-428d-9e10-3aed34f65863 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3baed9a946b10       docker.io/library/nginx@sha256:6666d93f054a3f4315894b76f2023f3da2fcb5ceb5f8d91625cca81623edd2da                              2 minutes ago       Running             nginx                     0                   b61092316de91       nginx
	6fd76bef250f9       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   619ce0114532b       busybox
	f0c5e0d8fcee9       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             3 minutes ago       Running             controller                0                   c468896c725fb       ingress-nginx-controller-56d7c84fd4-bjzkm
	5e1c4350cc680       a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb                                                             3 minutes ago       Exited              patch                     2                   14d24b5742c28       ingress-nginx-admission-patch-xqjd4
	1b7545854a4a2       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   3 minutes ago       Exited              create                    0                   a3990c2125278       ingress-nginx-admission-create-rmj79
	e220645bcc51b       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   d9bb0a1439eba       amd-gpu-device-plugin-lngvz
	6d038a23255be       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             4 minutes ago       Running             minikube-ingress-dns      0                   86b7d072594ed       kube-ingress-dns-minikube
	10448f633cbb1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   c4c2d67b929ee       storage-provisioner
	d4a71ed677d62       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             4 minutes ago       Running             coredns                   0                   72a3339a8672e       coredns-668d6bf9bc-zlwf7
	85749f84c3fc2       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a                                                             4 minutes ago       Running             kube-proxy                0                   bf002dea9ce4f       kube-proxy-whfw2
	62d9159e89009       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1                                                             4 minutes ago       Running             kube-scheduler            0                   52b4d214b78be       kube-scheduler-addons-234038
	c92752e59c0d2       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a                                                             4 minutes ago       Running             kube-apiserver            0                   342f2ebf37c1c       kube-apiserver-addons-234038
	368deae37d5f3       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                             4 minutes ago       Running             etcd                      0                   7c562b2c159bf       etcd-addons-234038
	8ba67e090d351       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35                                                             4 minutes ago       Running             kube-controller-manager   0                   ad8cf0f4e2aad       kube-controller-manager-addons-234038
	
	
	==> coredns [d4a71ed677d62285513d92e6781de7fe6d43ac83414862702f7566641512bee0] <==
	[INFO] 10.244.0.8:34156 - 11290 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000897994s
	[INFO] 10.244.0.8:34156 - 9336 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000093429s
	[INFO] 10.244.0.8:34156 - 55867 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000186035s
	[INFO] 10.244.0.8:34156 - 36509 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000094678s
	[INFO] 10.244.0.8:34156 - 34963 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000253251s
	[INFO] 10.244.0.8:34156 - 9611 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000109527s
	[INFO] 10.244.0.8:34156 - 27597 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000185338s
	[INFO] 10.244.0.8:39257 - 11568 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000111589s
	[INFO] 10.244.0.8:39257 - 11267 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000199919s
	[INFO] 10.244.0.8:41278 - 10800 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00008458s
	[INFO] 10.244.0.8:41278 - 10563 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000178717s
	[INFO] 10.244.0.8:58618 - 62154 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000089331s
	[INFO] 10.244.0.8:58618 - 61889 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000269353s
	[INFO] 10.244.0.8:52784 - 3787 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000094561s
	[INFO] 10.244.0.8:52784 - 3359 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000278662s
	[INFO] 10.244.0.23:33632 - 32829 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000312534s
	[INFO] 10.244.0.23:32966 - 45178 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000225098s
	[INFO] 10.244.0.23:48285 - 31858 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000199834s
	[INFO] 10.244.0.23:48428 - 20905 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000283618s
	[INFO] 10.244.0.23:38745 - 43295 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000172719s
	[INFO] 10.244.0.23:44231 - 42620 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000198392s
	[INFO] 10.244.0.23:37945 - 8985 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003275863s
	[INFO] 10.244.0.23:59440 - 2372 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 382 0.003434391s
	[INFO] 10.244.0.27:48604 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000502352s
	[INFO] 10.244.0.27:46736 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000108988s
	
	
	==> describe nodes <==
	Name:               addons-234038
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-234038
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ef65fd9d75393231710a2bc61f2cab58e3e6ecb2
	                    minikube.k8s.io/name=addons-234038
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_02_10T12_06_08_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-234038
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Feb 2025 12:06:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-234038
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Feb 2025 12:10:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Feb 2025 12:08:41 +0000   Mon, 10 Feb 2025 12:06:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Feb 2025 12:08:41 +0000   Mon, 10 Feb 2025 12:06:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Feb 2025 12:08:41 +0000   Mon, 10 Feb 2025 12:06:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Feb 2025 12:08:41 +0000   Mon, 10 Feb 2025 12:06:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.247
	  Hostname:    addons-234038
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 be92e3e52d4c4e6ea1f20b15a6ec52e1
	  System UUID:                be92e3e5-2d4c-4e6e-a1f2-0b15a6ec52e1
	  Boot ID:                    cfb76286-5b2e-4d76-a0ae-4eb03618b244
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m12s
	  default                     hello-world-app-7d9564db4-2qmkk              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  ingress-nginx               ingress-nginx-controller-56d7c84fd4-bjzkm    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m29s
	  kube-system                 amd-gpu-device-plugin-lngvz                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m34s
	  kube-system                 coredns-668d6bf9bc-zlwf7                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m36s
	  kube-system                 etcd-addons-234038                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m41s
	  kube-system                 kube-apiserver-addons-234038                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 kube-controller-manager-addons-234038        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 kube-proxy-whfw2                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 kube-scheduler-addons-234038                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m34s  kube-proxy       
	  Normal  Starting                 4m41s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m41s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m41s  kubelet          Node addons-234038 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m41s  kubelet          Node addons-234038 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m41s  kubelet          Node addons-234038 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m40s  kubelet          Node addons-234038 status is now: NodeReady
	  Normal  RegisteredNode           4m37s  node-controller  Node addons-234038 event: Registered Node addons-234038 in Controller
	
	
	==> dmesg <==
	[Feb10 12:06] systemd-fstab-generator[868]: Ignoring "noauto" option for root device
	[  +0.063397] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.990269] systemd-fstab-generator[1215]: Ignoring "noauto" option for root device
	[  +0.070259] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.389262] systemd-fstab-generator[1372]: Ignoring "noauto" option for root device
	[  +0.119658] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.019240] kauditd_printk_skb: 121 callbacks suppressed
	[  +5.002686] kauditd_printk_skb: 154 callbacks suppressed
	[  +5.306915] kauditd_printk_skb: 46 callbacks suppressed
	[ +14.069400] kauditd_printk_skb: 10 callbacks suppressed
	[Feb10 12:07] kauditd_printk_skb: 31 callbacks suppressed
	[  +6.337986] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.577006] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.460467] kauditd_printk_skb: 42 callbacks suppressed
	[  +7.539661] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.596852] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.504718] kauditd_printk_skb: 17 callbacks suppressed
	[ +16.921931] kauditd_printk_skb: 2 callbacks suppressed
	[Feb10 12:08] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.588353] kauditd_printk_skb: 21 callbacks suppressed
	[  +6.912877] kauditd_printk_skb: 31 callbacks suppressed
	[  +5.096368] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.463082] kauditd_printk_skb: 52 callbacks suppressed
	[  +5.469876] kauditd_printk_skb: 57 callbacks suppressed
	[ +28.162328] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [368deae37d5f386b063c29e008c3141625bc33adb45c28519583f14bfa7af022] <==
	{"level":"info","ts":"2025-02-10T12:07:30.262211Z","caller":"traceutil/trace.go:171","msg":"trace[612074534] transaction","detail":"{read_only:false; response_revision:1100; number_of_response:1; }","duration":"403.438937ms","start":"2025-02-10T12:07:29.858763Z","end":"2025-02-10T12:07:30.262202Z","steps":["trace[612074534] 'process raft request'  (duration: 403.174488ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-10T12:07:30.262302Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-10T12:07:29.858735Z","time spent":"403.517996ms","remote":"127.0.0.1:60668","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1094 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-02-10T12:07:30.262423Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"345.663286ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-10T12:07:30.262741Z","caller":"traceutil/trace.go:171","msg":"trace[1226734813] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1100; }","duration":"345.995782ms","start":"2025-02-10T12:07:29.916735Z","end":"2025-02-10T12:07:30.262731Z","steps":["trace[1226734813] 'agreement among raft nodes before linearized reading'  (duration: 345.667292ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-10T12:07:30.262856Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-10T12:07:29.916722Z","time spent":"346.120998ms","remote":"127.0.0.1:60684","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-02-10T12:07:30.263085Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"267.732368ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-10T12:07:30.263177Z","caller":"traceutil/trace.go:171","msg":"trace[1578352651] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1100; }","duration":"267.827265ms","start":"2025-02-10T12:07:29.995343Z","end":"2025-02-10T12:07:30.263170Z","steps":["trace[1578352651] 'agreement among raft nodes before linearized reading'  (duration: 267.721014ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-10T12:07:30.264947Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.513526ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-10T12:07:30.265105Z","caller":"traceutil/trace.go:171","msg":"trace[729657893] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1100; }","duration":"108.58729ms","start":"2025-02-10T12:07:30.156395Z","end":"2025-02-10T12:07:30.264982Z","steps":["trace[729657893] 'agreement among raft nodes before linearized reading'  (duration: 108.103194ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-10T12:07:34.530050Z","caller":"traceutil/trace.go:171","msg":"trace[490932119] transaction","detail":"{read_only:false; response_revision:1125; number_of_response:1; }","duration":"251.913042ms","start":"2025-02-10T12:07:34.278122Z","end":"2025-02-10T12:07:34.530035Z","steps":["trace[490932119] 'process raft request'  (duration: 251.539989ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-10T12:07:34.533870Z","caller":"traceutil/trace.go:171","msg":"trace[1462626125] linearizableReadLoop","detail":"{readStateIndex:1162; appliedIndex:1160; }","duration":"169.99953ms","start":"2025-02-10T12:07:34.363857Z","end":"2025-02-10T12:07:34.533857Z","steps":["trace[1462626125] 'read index received'  (duration: 165.547886ms)","trace[1462626125] 'applied index is now lower than readState.Index'  (duration: 4.451184ms)"],"step_count":2}
	{"level":"info","ts":"2025-02-10T12:07:34.533948Z","caller":"traceutil/trace.go:171","msg":"trace[1453646431] transaction","detail":"{read_only:false; response_revision:1126; number_of_response:1; }","duration":"172.483693ms","start":"2025-02-10T12:07:34.361458Z","end":"2025-02-10T12:07:34.533942Z","steps":["trace[1453646431] 'process raft request'  (duration: 172.25366ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-10T12:07:34.534261Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"170.382565ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-patch-xqjd4\" limit:1 ","response":"range_response_count:1 size:4430"}
	{"level":"info","ts":"2025-02-10T12:07:34.534303Z","caller":"traceutil/trace.go:171","msg":"trace[11875417] range","detail":"{range_begin:/registry/pods/ingress-nginx/ingress-nginx-admission-patch-xqjd4; range_end:; response_count:1; response_revision:1126; }","duration":"170.440976ms","start":"2025-02-10T12:07:34.363854Z","end":"2025-02-10T12:07:34.534295Z","steps":["trace[11875417] 'agreement among raft nodes before linearized reading'  (duration: 170.332285ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-10T12:08:10.837813Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-10T12:08:10.422865Z","time spent":"414.945534ms","remote":"127.0.0.1:60602","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"info","ts":"2025-02-10T12:08:10.838151Z","caller":"traceutil/trace.go:171","msg":"trace[1011987787] linearizableReadLoop","detail":"{readStateIndex:1430; appliedIndex:1430; }","duration":"403.838711ms","start":"2025-02-10T12:08:10.434301Z","end":"2025-02-10T12:08:10.838139Z","steps":["trace[1011987787] 'read index received'  (duration: 403.835781ms)","trace[1011987787] 'applied index is now lower than readState.Index'  (duration: 2.426µs)"],"step_count":2}
	{"level":"warn","ts":"2025-02-10T12:08:10.838716Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"404.401591ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-10T12:08:10.838780Z","caller":"traceutil/trace.go:171","msg":"trace[1329678599] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1383; }","duration":"404.492074ms","start":"2025-02-10T12:08:10.434281Z","end":"2025-02-10T12:08:10.838773Z","steps":["trace[1329678599] 'agreement among raft nodes before linearized reading'  (duration: 404.372814ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-10T12:08:10.838834Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-10T12:08:10.434270Z","time spent":"404.558417ms","remote":"127.0.0.1:60534","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2025-02-10T12:08:10.843588Z","caller":"traceutil/trace.go:171","msg":"trace[1627816422] transaction","detail":"{read_only:false; response_revision:1384; number_of_response:1; }","duration":"188.359485ms","start":"2025-02-10T12:08:10.655217Z","end":"2025-02-10T12:08:10.843576Z","steps":["trace[1627816422] 'process raft request'  (duration: 188.074481ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-10T12:08:10.844535Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"113.880517ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2025-02-10T12:08:10.844641Z","caller":"traceutil/trace.go:171","msg":"trace[99615998] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1384; }","duration":"114.079103ms","start":"2025-02-10T12:08:10.730551Z","end":"2025-02-10T12:08:10.844631Z","steps":["trace[99615998] 'agreement among raft nodes before linearized reading'  (duration: 113.557375ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-10T12:08:10.845017Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"303.338028ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges/\" range_end:\"/registry/limitranges0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-10T12:08:10.845111Z","caller":"traceutil/trace.go:171","msg":"trace[475474096] range","detail":"{range_begin:/registry/limitranges/; range_end:/registry/limitranges0; response_count:0; response_revision:1384; }","duration":"303.450951ms","start":"2025-02-10T12:08:10.541649Z","end":"2025-02-10T12:08:10.845100Z","steps":["trace[475474096] 'agreement among raft nodes before linearized reading'  (duration: 303.335953ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-10T12:08:10.845201Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-10T12:08:10.541635Z","time spent":"303.490545ms","remote":"127.0.0.1:60654","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":0,"response size":28,"request content":"key:\"/registry/limitranges/\" range_end:\"/registry/limitranges0\" count_only:true "}
	
	
	==> kernel <==
	 12:10:48 up 5 min,  0 users,  load average: 0.35, 0.95, 0.50
	Linux addons-234038 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [c92752e59c0d2f0ef1c62a4bebc4f3fd37d46693624376461b65c746f60dce72] <==
	I0210 12:06:58.097875       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0210 12:07:46.473925       1 conn.go:339] Error on socket receive: read tcp 192.168.39.247:8443->192.168.39.1:50600: use of closed network connection
	E0210 12:07:46.650104       1 conn.go:339] Error on socket receive: read tcp 192.168.39.247:8443->192.168.39.1:50644: use of closed network connection
	I0210 12:07:55.773126       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.99.81.190"}
	I0210 12:08:18.374370       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0210 12:08:21.933769       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0210 12:08:22.109448       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.247.235"}
	I0210 12:08:24.792028       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0210 12:08:25.926440       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0210 12:08:37.301343       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0210 12:08:37.301469       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0210 12:08:37.328543       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0210 12:08:37.328681       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0210 12:08:37.406650       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0210 12:08:37.406775       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0210 12:08:37.409926       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0210 12:08:37.410228       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0210 12:08:37.553608       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0210 12:08:37.553648       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0210 12:08:38.411395       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0210 12:08:38.553881       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0210 12:08:38.559883       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0210 12:08:42.647960       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0210 12:08:59.013605       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0210 12:10:47.066333       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.109.142.180"}
	
	
	==> kube-controller-manager [8ba67e090d35175d954f0e9e9aa37da817d73f15946b33668108107a960f16e6] <==
	E0210 12:09:29.256832       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0210 12:09:58.018478       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0210 12:09:58.019713       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotclasses"
	W0210 12:09:58.020680       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0210 12:09:58.020765       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0210 12:09:58.484314       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0210 12:09:58.485635       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
	W0210 12:09:58.486483       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0210 12:09:58.486590       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0210 12:10:02.023458       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0210 12:10:02.024416       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0210 12:10:02.025174       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0210 12:10:02.025229       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0210 12:10:08.427584       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0210 12:10:08.428637       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
	W0210 12:10:08.429925       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0210 12:10:08.430585       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0210 12:10:46.854368       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="26.317764ms"
	I0210 12:10:46.869273       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="14.524059ms"
	I0210 12:10:46.900600       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="31.194558ms"
	I0210 12:10:46.900704       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="64.828µs"
	W0210 12:10:47.885876       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0210 12:10:47.887229       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
	W0210 12:10:47.888122       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0210 12:10:47.888174       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [85749f84c3fc2973ff009ba09e8511da5134fc3e44a95c421d5318887834d98b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0210 12:06:13.953098       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0210 12:06:13.966132       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.247"]
	E0210 12:06:13.972564       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0210 12:06:14.096111       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0210 12:06:14.096138       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0210 12:06:14.096161       1 server_linux.go:170] "Using iptables Proxier"
	I0210 12:06:14.098464       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0210 12:06:14.098779       1 server.go:497] "Version info" version="v1.32.1"
	I0210 12:06:14.098802       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:06:14.100611       1 config.go:199] "Starting service config controller"
	I0210 12:06:14.100637       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0210 12:06:14.100672       1 config.go:105] "Starting endpoint slice config controller"
	I0210 12:06:14.100676       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0210 12:06:14.101028       1 config.go:329] "Starting node config controller"
	I0210 12:06:14.101033       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0210 12:06:14.200789       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0210 12:06:14.200814       1 shared_informer.go:320] Caches are synced for service config
	I0210 12:06:14.201112       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [62d9159e89009d81ce220ca1c8a046ff914e8f083ecddbb05457702d7f5aafb1] <==
	W0210 12:06:05.612556       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0210 12:06:05.612603       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0210 12:06:05.635181       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0210 12:06:05.635229       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0210 12:06:05.646115       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0210 12:06:05.646223       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0210 12:06:05.722653       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0210 12:06:05.722708       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0210 12:06:05.735990       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0210 12:06:05.736035       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0210 12:06:05.750264       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0210 12:06:05.750310       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0210 12:06:05.754717       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0210 12:06:05.754758       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0210 12:06:05.865050       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0210 12:06:05.865094       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0210 12:06:05.892690       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0210 12:06:05.892771       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0210 12:06:05.918195       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0210 12:06:05.918284       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0210 12:06:05.920431       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0210 12:06:05.920791       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0210 12:06:05.920649       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0210 12:06:05.920959       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:06:08.541319       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 10 12:10:07 addons-234038 kubelet[1222]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 10 12:10:07 addons-234038 kubelet[1222]: E0210 12:10:07.692893    1222 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739189407692640872,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595287,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 10 12:10:07 addons-234038 kubelet[1222]: E0210 12:10:07.692927    1222 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739189407692640872,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595287,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 10 12:10:17 addons-234038 kubelet[1222]: E0210 12:10:17.695327    1222 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739189417695022178,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595287,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 10 12:10:17 addons-234038 kubelet[1222]: E0210 12:10:17.695366    1222 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739189417695022178,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595287,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 10 12:10:27 addons-234038 kubelet[1222]: E0210 12:10:27.697855    1222 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739189427697576310,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595287,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 10 12:10:27 addons-234038 kubelet[1222]: E0210 12:10:27.697897    1222 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739189427697576310,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595287,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 10 12:10:37 addons-234038 kubelet[1222]: E0210 12:10:37.700763    1222 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739189437700285973,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595287,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 10 12:10:37 addons-234038 kubelet[1222]: E0210 12:10:37.701069    1222 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739189437700285973,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595287,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 10 12:10:46 addons-234038 kubelet[1222]: I0210 12:10:46.861812    1222 memory_manager.go:355] "RemoveStaleState removing state" podUID="5f249829-1b29-4874-a350-1dce0255b97c" containerName="helper-pod"
	Feb 10 12:10:46 addons-234038 kubelet[1222]: I0210 12:10:46.861857    1222 memory_manager.go:355] "RemoveStaleState removing state" podUID="186e78c8-7df7-4547-bd82-47c0e91608fd" containerName="local-path-provisioner"
	Feb 10 12:10:46 addons-234038 kubelet[1222]: I0210 12:10:46.861864    1222 memory_manager.go:355] "RemoveStaleState removing state" podUID="ebb0833b-d862-4ad0-a942-2c6577d5ff4b" containerName="task-pv-container"
	Feb 10 12:10:46 addons-234038 kubelet[1222]: I0210 12:10:46.861870    1222 memory_manager.go:355] "RemoveStaleState removing state" podUID="0b3851a5-203e-49c0-b159-5e1cc014c915" containerName="csi-resizer"
	Feb 10 12:10:46 addons-234038 kubelet[1222]: I0210 12:10:46.861875    1222 memory_manager.go:355] "RemoveStaleState removing state" podUID="f4827625-8687-4ef1-ab55-010498bb569e" containerName="liveness-probe"
	Feb 10 12:10:46 addons-234038 kubelet[1222]: I0210 12:10:46.861879    1222 memory_manager.go:355] "RemoveStaleState removing state" podUID="6bb4c9ba-39b4-4471-bda5-8802ce0be34d" containerName="volume-snapshot-controller"
	Feb 10 12:10:46 addons-234038 kubelet[1222]: I0210 12:10:46.861883    1222 memory_manager.go:355] "RemoveStaleState removing state" podUID="f4827625-8687-4ef1-ab55-010498bb569e" containerName="hostpath"
	Feb 10 12:10:46 addons-234038 kubelet[1222]: I0210 12:10:46.861888    1222 memory_manager.go:355] "RemoveStaleState removing state" podUID="f4827625-8687-4ef1-ab55-010498bb569e" containerName="csi-external-health-monitor-controller"
	Feb 10 12:10:46 addons-234038 kubelet[1222]: I0210 12:10:46.861894    1222 memory_manager.go:355] "RemoveStaleState removing state" podUID="f4827625-8687-4ef1-ab55-010498bb569e" containerName="csi-snapshotter"
	Feb 10 12:10:46 addons-234038 kubelet[1222]: I0210 12:10:46.861899    1222 memory_manager.go:355] "RemoveStaleState removing state" podUID="8bc3b8c0-05f1-4a73-a4d1-28e8aa056257" containerName="volume-snapshot-controller"
	Feb 10 12:10:46 addons-234038 kubelet[1222]: I0210 12:10:46.861904    1222 memory_manager.go:355] "RemoveStaleState removing state" podUID="52526ae9-6955-4d29-893a-11c01a9ac90d" containerName="csi-attacher"
	Feb 10 12:10:46 addons-234038 kubelet[1222]: I0210 12:10:46.861908    1222 memory_manager.go:355] "RemoveStaleState removing state" podUID="f4827625-8687-4ef1-ab55-010498bb569e" containerName="node-driver-registrar"
	Feb 10 12:10:46 addons-234038 kubelet[1222]: I0210 12:10:46.861913    1222 memory_manager.go:355] "RemoveStaleState removing state" podUID="f4827625-8687-4ef1-ab55-010498bb569e" containerName="csi-provisioner"
	Feb 10 12:10:46 addons-234038 kubelet[1222]: I0210 12:10:46.862813    1222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt7vf\" (UniqueName: \"kubernetes.io/projected/45456a45-5138-47aa-a6f9-1fd2658a3c2f-kube-api-access-rt7vf\") pod \"hello-world-app-7d9564db4-2qmkk\" (UID: \"45456a45-5138-47aa-a6f9-1fd2658a3c2f\") " pod="default/hello-world-app-7d9564db4-2qmkk"
	Feb 10 12:10:47 addons-234038 kubelet[1222]: E0210 12:10:47.704010    1222 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739189447703733787,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595287,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 10 12:10:47 addons-234038 kubelet[1222]: E0210 12:10:47.704045    1222 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739189447703733787,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595287,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [10448f633cbb121721da724f7968c9ec57da4173de760911b865d5c2d1f73b2a] <==
	I0210 12:06:19.434700       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0210 12:06:19.491220       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0210 12:06:19.491344       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0210 12:06:19.512663       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0210 12:06:19.512801       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-234038_349723b9-6519-4b74-a4c2-a2738667e66c!
	I0210 12:06:19.513558       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d8bede14-1bcd-4955-9de6-fb17c800ed86", APIVersion:"v1", ResourceVersion:"653", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-234038_349723b9-6519-4b74-a4c2-a2738667e66c became leader
	I0210 12:06:19.612979       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-234038_349723b9-6519-4b74-a4c2-a2738667e66c!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-234038 -n addons-234038
helpers_test.go:261: (dbg) Run:  kubectl --context addons-234038 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-7d9564db4-2qmkk ingress-nginx-admission-create-rmj79 ingress-nginx-admission-patch-xqjd4
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-234038 describe pod hello-world-app-7d9564db4-2qmkk ingress-nginx-admission-create-rmj79 ingress-nginx-admission-patch-xqjd4
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-234038 describe pod hello-world-app-7d9564db4-2qmkk ingress-nginx-admission-create-rmj79 ingress-nginx-admission-patch-xqjd4: exit status 1 (66.37944ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-7d9564db4-2qmkk
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-234038/192.168.39.247
	Start Time:       Mon, 10 Feb 2025 12:10:46 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=7d9564db4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-7d9564db4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rt7vf (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-rt7vf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-7d9564db4-2qmkk to addons-234038
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-rmj79" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-xqjd4" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-234038 describe pod hello-world-app-7d9564db4-2qmkk ingress-nginx-admission-create-rmj79 ingress-nginx-admission-patch-xqjd4: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-234038 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-234038 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-234038 addons disable ingress --alsologtostderr -v=1: (7.743781212s)
--- FAIL: TestAddons/parallel/Ingress (156.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (2.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 image ls --format short --alsologtostderr
functional_test.go:278: (dbg) Done: out/minikube-linux-amd64 -p functional-653300 image ls --format short --alsologtostderr: (2.279559822s)
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-653300 image ls --format short --alsologtostderr:

                                                
                                                
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-653300 image ls --format short --alsologtostderr:
I0210 12:16:12.036409  640889 out.go:345] Setting OutFile to fd 1 ...
I0210 12:16:12.036535  640889 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 12:16:12.036545  640889 out.go:358] Setting ErrFile to fd 2...
I0210 12:16:12.036550  640889 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 12:16:12.036776  640889 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20383-625153/.minikube/bin
I0210 12:16:12.037490  640889 config.go:182] Loaded profile config "functional-653300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0210 12:16:12.037611  640889 config.go:182] Loaded profile config "functional-653300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0210 12:16:12.037986  640889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0210 12:16:12.038054  640889 main.go:141] libmachine: Launching plugin server for driver kvm2
I0210 12:16:12.053806  640889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44201
I0210 12:16:12.054328  640889 main.go:141] libmachine: () Calling .GetVersion
I0210 12:16:12.054974  640889 main.go:141] libmachine: Using API Version  1
I0210 12:16:12.054998  640889 main.go:141] libmachine: () Calling .SetConfigRaw
I0210 12:16:12.055353  640889 main.go:141] libmachine: () Calling .GetMachineName
I0210 12:16:12.055551  640889 main.go:141] libmachine: (functional-653300) Calling .GetState
I0210 12:16:12.057436  640889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0210 12:16:12.057482  640889 main.go:141] libmachine: Launching plugin server for driver kvm2
I0210 12:16:12.073310  640889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38563
I0210 12:16:12.073732  640889 main.go:141] libmachine: () Calling .GetVersion
I0210 12:16:12.074279  640889 main.go:141] libmachine: Using API Version  1
I0210 12:16:12.074306  640889 main.go:141] libmachine: () Calling .SetConfigRaw
I0210 12:16:12.074627  640889 main.go:141] libmachine: () Calling .GetMachineName
I0210 12:16:12.074829  640889 main.go:141] libmachine: (functional-653300) Calling .DriverName
I0210 12:16:12.075028  640889 ssh_runner.go:195] Run: systemctl --version
I0210 12:16:12.075057  640889 main.go:141] libmachine: (functional-653300) Calling .GetSSHHostname
I0210 12:16:12.077788  640889 main.go:141] libmachine: (functional-653300) DBG | domain functional-653300 has defined MAC address 52:54:00:09:ba:a1 in network mk-functional-653300
I0210 12:16:12.078191  640889 main.go:141] libmachine: (functional-653300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:ba:a1", ip: ""} in network mk-functional-653300: {Iface:virbr1 ExpiryTime:2025-02-10 13:13:34 +0000 UTC Type:0 Mac:52:54:00:09:ba:a1 Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:functional-653300 Clientid:01:52:54:00:09:ba:a1}
I0210 12:16:12.078231  640889 main.go:141] libmachine: (functional-653300) DBG | domain functional-653300 has defined IP address 192.168.50.60 and MAC address 52:54:00:09:ba:a1 in network mk-functional-653300
I0210 12:16:12.078322  640889 main.go:141] libmachine: (functional-653300) Calling .GetSSHPort
I0210 12:16:12.078506  640889 main.go:141] libmachine: (functional-653300) Calling .GetSSHKeyPath
I0210 12:16:12.078669  640889 main.go:141] libmachine: (functional-653300) Calling .GetSSHUsername
I0210 12:16:12.078806  640889 sshutil.go:53] new ssh client: &{IP:192.168.50.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/functional-653300/id_rsa Username:docker}
I0210 12:16:12.172978  640889 ssh_runner.go:195] Run: sudo crictl images --output json
I0210 12:16:14.260308  640889 ssh_runner.go:235] Completed: sudo crictl images --output json: (2.087288466s)
W0210 12:16:14.260403  640889 cache_images.go:734] Failed to list images for profile functional-653300 crictl images: sudo crictl images --output json: Process exited with status 1
stdout:

                                                
                                                
stderr:
E0210 12:16:14.248583    8561 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="&ImageFilter{Image:&ImageSpec{Image:,Annotations:map[string]string{},UserSpecifiedImage:,},}"
time="2025-02-10T12:16:14Z" level=fatal msg="listing images: rpc error: code = DeadlineExceeded desc = context deadline exceeded"
I0210 12:16:14.260459  640889 main.go:141] libmachine: Making call to close driver server
I0210 12:16:14.260477  640889 main.go:141] libmachine: (functional-653300) Calling .Close
I0210 12:16:14.260781  640889 main.go:141] libmachine: Successfully made call to close driver server
I0210 12:16:14.260802  640889 main.go:141] libmachine: Making call to close connection to plugin binary
I0210 12:16:14.260817  640889 main.go:141] libmachine: Making call to close driver server
I0210 12:16:14.260826  640889 main.go:141] libmachine: (functional-653300) Calling .Close
I0210 12:16:14.261099  640889 main.go:141] libmachine: Successfully made call to close driver server
I0210 12:16:14.261131  640889 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:292: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (2.28s)

                                                
                                    
TestPreload (281.12s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-860024 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-860024 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m4.68623241s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-860024 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-860024 image pull gcr.io/k8s-minikube/busybox: (2.37129584s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-860024
E0210 13:00:46.485326  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/functional-653300/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-860024: (1m30.787064072s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-860024 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E0210 13:02:19.419244  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-860024 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m0.21215327s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-860024 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:629: *** TestPreload FAILED at 2025-02-10 13:02:24.064078184 +0000 UTC m=+3432.443932251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-860024 -n test-preload-860024
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-860024 logs -n 25
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-790589 ssh -n                                                                 | multinode-790589     | jenkins | v1.35.0 | 10 Feb 25 12:45 UTC | 10 Feb 25 12:45 UTC |
	|         | multinode-790589-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-790589 ssh -n multinode-790589 sudo cat                                       | multinode-790589     | jenkins | v1.35.0 | 10 Feb 25 12:45 UTC | 10 Feb 25 12:45 UTC |
	|         | /home/docker/cp-test_multinode-790589-m03_multinode-790589.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-790589 cp multinode-790589-m03:/home/docker/cp-test.txt                       | multinode-790589     | jenkins | v1.35.0 | 10 Feb 25 12:45 UTC | 10 Feb 25 12:45 UTC |
	|         | multinode-790589-m02:/home/docker/cp-test_multinode-790589-m03_multinode-790589-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-790589 ssh -n                                                                 | multinode-790589     | jenkins | v1.35.0 | 10 Feb 25 12:45 UTC | 10 Feb 25 12:45 UTC |
	|         | multinode-790589-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-790589 ssh -n multinode-790589-m02 sudo cat                                   | multinode-790589     | jenkins | v1.35.0 | 10 Feb 25 12:45 UTC | 10 Feb 25 12:45 UTC |
	|         | /home/docker/cp-test_multinode-790589-m03_multinode-790589-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-790589 node stop m03                                                          | multinode-790589     | jenkins | v1.35.0 | 10 Feb 25 12:45 UTC | 10 Feb 25 12:45 UTC |
	| node    | multinode-790589 node start                                                             | multinode-790589     | jenkins | v1.35.0 | 10 Feb 25 12:45 UTC | 10 Feb 25 12:46 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-790589                                                                | multinode-790589     | jenkins | v1.35.0 | 10 Feb 25 12:46 UTC |                     |
	| stop    | -p multinode-790589                                                                     | multinode-790589     | jenkins | v1.35.0 | 10 Feb 25 12:46 UTC | 10 Feb 25 12:49 UTC |
	| start   | -p multinode-790589                                                                     | multinode-790589     | jenkins | v1.35.0 | 10 Feb 25 12:49 UTC | 10 Feb 25 12:51 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-790589                                                                | multinode-790589     | jenkins | v1.35.0 | 10 Feb 25 12:51 UTC |                     |
	| node    | multinode-790589 node delete                                                            | multinode-790589     | jenkins | v1.35.0 | 10 Feb 25 12:51 UTC | 10 Feb 25 12:52 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-790589 stop                                                                   | multinode-790589     | jenkins | v1.35.0 | 10 Feb 25 12:52 UTC | 10 Feb 25 12:55 UTC |
	| start   | -p multinode-790589                                                                     | multinode-790589     | jenkins | v1.35.0 | 10 Feb 25 12:55 UTC | 10 Feb 25 12:56 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-790589                                                                | multinode-790589     | jenkins | v1.35.0 | 10 Feb 25 12:56 UTC |                     |
	| start   | -p multinode-790589-m02                                                                 | multinode-790589-m02 | jenkins | v1.35.0 | 10 Feb 25 12:56 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-790589-m03                                                                 | multinode-790589-m03 | jenkins | v1.35.0 | 10 Feb 25 12:56 UTC | 10 Feb 25 12:57 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-790589                                                                 | multinode-790589     | jenkins | v1.35.0 | 10 Feb 25 12:57 UTC |                     |
	| delete  | -p multinode-790589-m03                                                                 | multinode-790589-m03 | jenkins | v1.35.0 | 10 Feb 25 12:57 UTC | 10 Feb 25 12:57 UTC |
	| delete  | -p multinode-790589                                                                     | multinode-790589     | jenkins | v1.35.0 | 10 Feb 25 12:57 UTC | 10 Feb 25 12:57 UTC |
	| start   | -p test-preload-860024                                                                  | test-preload-860024  | jenkins | v1.35.0 | 10 Feb 25 12:57 UTC | 10 Feb 25 12:59 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-860024 image pull                                                          | test-preload-860024  | jenkins | v1.35.0 | 10 Feb 25 12:59 UTC | 10 Feb 25 12:59 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-860024                                                                  | test-preload-860024  | jenkins | v1.35.0 | 10 Feb 25 12:59 UTC | 10 Feb 25 13:01 UTC |
	| start   | -p test-preload-860024                                                                  | test-preload-860024  | jenkins | v1.35.0 | 10 Feb 25 13:01 UTC | 10 Feb 25 13:02 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-860024 image list                                                          | test-preload-860024  | jenkins | v1.35.0 | 10 Feb 25 13:02 UTC | 10 Feb 25 13:02 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/10 13:01:23
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0210 13:01:23.683572  663506 out.go:345] Setting OutFile to fd 1 ...
	I0210 13:01:23.683678  663506 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 13:01:23.683683  663506 out.go:358] Setting ErrFile to fd 2...
	I0210 13:01:23.683687  663506 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 13:01:23.683862  663506 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20383-625153/.minikube/bin
	I0210 13:01:23.684420  663506 out.go:352] Setting JSON to false
	I0210 13:01:23.685370  663506 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":17034,"bootTime":1739175450,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0210 13:01:23.685484  663506 start.go:139] virtualization: kvm guest
	I0210 13:01:23.687956  663506 out.go:177] * [test-preload-860024] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0210 13:01:23.689450  663506 out.go:177]   - MINIKUBE_LOCATION=20383
	I0210 13:01:23.689458  663506 notify.go:220] Checking for updates...
	I0210 13:01:23.692238  663506 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 13:01:23.693607  663506 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20383-625153/kubeconfig
	I0210 13:01:23.694793  663506 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20383-625153/.minikube
	I0210 13:01:23.696057  663506 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0210 13:01:23.697203  663506 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0210 13:01:23.698656  663506 config.go:182] Loaded profile config "test-preload-860024": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0210 13:01:23.699089  663506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:01:23.699149  663506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:01:23.714314  663506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40593
	I0210 13:01:23.714770  663506 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:01:23.715343  663506 main.go:141] libmachine: Using API Version  1
	I0210 13:01:23.715368  663506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:01:23.715690  663506 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:01:23.715910  663506 main.go:141] libmachine: (test-preload-860024) Calling .DriverName
	I0210 13:01:23.717602  663506 out.go:177] * Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
	I0210 13:01:23.718836  663506 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 13:01:23.719161  663506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:01:23.719211  663506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:01:23.733830  663506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39721
	I0210 13:01:23.734225  663506 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:01:23.734717  663506 main.go:141] libmachine: Using API Version  1
	I0210 13:01:23.734737  663506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:01:23.735065  663506 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:01:23.735269  663506 main.go:141] libmachine: (test-preload-860024) Calling .DriverName
	I0210 13:01:23.770220  663506 out.go:177] * Using the kvm2 driver based on existing profile
	I0210 13:01:23.772279  663506 start.go:297] selected driver: kvm2
	I0210 13:01:23.772298  663506 start.go:901] validating driver "kvm2" against &{Name:test-preload-860024 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-860024 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.223 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 13:01:23.772417  663506 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0210 13:01:23.773072  663506 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 13:01:23.773176  663506 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20383-625153/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0210 13:01:23.788026  663506 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0210 13:01:23.788444  663506 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0210 13:01:23.788479  663506 cni.go:84] Creating CNI manager for ""
	I0210 13:01:23.788542  663506 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 13:01:23.788610  663506 start.go:340] cluster config:
	{Name:test-preload-860024 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-860024 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.223 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 13:01:23.788734  663506 iso.go:125] acquiring lock: {Name:mk013d189757e85c0699014f5ef29205d9d4927f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 13:01:23.790773  663506 out.go:177] * Starting "test-preload-860024" primary control-plane node in "test-preload-860024" cluster
	I0210 13:01:23.792116  663506 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0210 13:01:23.817259  663506 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0210 13:01:23.817286  663506 cache.go:56] Caching tarball of preloaded images
	I0210 13:01:23.817439  663506 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0210 13:01:23.819144  663506 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0210 13:01:23.820309  663506 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0210 13:01:23.849834  663506 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/20383-625153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0210 13:01:26.261955  663506 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0210 13:01:26.262057  663506 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20383-625153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0210 13:01:27.121896  663506 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
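The download above appends ?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 to the preload URL, and the surrounding lines save and verify that checksum against the cached tarball. A minimal Go sketch of the same kind of md5 verification (path and digest are copied from this run; illustrative only, not minikube's own code):

    // verify_preload.go - recompute the md5 of the cached preload tarball and
    // compare it with the digest advertised in the download URL above.
    package main

    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
        "io"
        "os"
    )

    func main() {
        const path = "/home/jenkins/minikube-integration/20383-625153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4"
        const want = "b2ee0ab83ed99f9e7ff71cb0cf27e8f9"

        f, err := os.Open(path)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        h := md5.New()
        if _, err := io.Copy(h, f); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != want {
            fmt.Printf("checksum mismatch: got %s, want %s\n", got, want)
            os.Exit(1)
        }
        fmt.Println("preload checksum OK")
    }

The same check can also be run from a shell with md5sum against the expected digest.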
	I0210 13:01:27.122036  663506 profile.go:143] Saving config to /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/test-preload-860024/config.json ...
	I0210 13:01:27.122279  663506 start.go:360] acquireMachinesLock for test-preload-860024: {Name:mk28e87da66de739a4c7c70d1fb5afc4ce31a4d0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0210 13:01:27.122359  663506 start.go:364] duration metric: took 53.261µs to acquireMachinesLock for "test-preload-860024"
	I0210 13:01:27.122382  663506 start.go:96] Skipping create...Using existing machine configuration
	I0210 13:01:27.122393  663506 fix.go:54] fixHost starting: 
	I0210 13:01:27.122674  663506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:01:27.122719  663506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:01:27.137843  663506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38263
	I0210 13:01:27.138300  663506 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:01:27.138809  663506 main.go:141] libmachine: Using API Version  1
	I0210 13:01:27.138836  663506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:01:27.139175  663506 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:01:27.139384  663506 main.go:141] libmachine: (test-preload-860024) Calling .DriverName
	I0210 13:01:27.139540  663506 main.go:141] libmachine: (test-preload-860024) Calling .GetState
	I0210 13:01:27.141164  663506 fix.go:112] recreateIfNeeded on test-preload-860024: state=Stopped err=<nil>
	I0210 13:01:27.141196  663506 main.go:141] libmachine: (test-preload-860024) Calling .DriverName
	W0210 13:01:27.141365  663506 fix.go:138] unexpected machine state, will restart: <nil>
	I0210 13:01:27.144223  663506 out.go:177] * Restarting existing kvm2 VM for "test-preload-860024" ...
	I0210 13:01:27.145633  663506 main.go:141] libmachine: (test-preload-860024) Calling .Start
	I0210 13:01:27.145809  663506 main.go:141] libmachine: (test-preload-860024) starting domain...
	I0210 13:01:27.145828  663506 main.go:141] libmachine: (test-preload-860024) ensuring networks are active...
	I0210 13:01:27.146647  663506 main.go:141] libmachine: (test-preload-860024) Ensuring network default is active
	I0210 13:01:27.147002  663506 main.go:141] libmachine: (test-preload-860024) Ensuring network mk-test-preload-860024 is active
	I0210 13:01:27.147520  663506 main.go:141] libmachine: (test-preload-860024) getting domain XML...
	I0210 13:01:27.148265  663506 main.go:141] libmachine: (test-preload-860024) creating domain...
	I0210 13:01:28.355733  663506 main.go:141] libmachine: (test-preload-860024) waiting for IP...
	I0210 13:01:28.356649  663506 main.go:141] libmachine: (test-preload-860024) DBG | domain test-preload-860024 has defined MAC address 52:54:00:3e:ca:83 in network mk-test-preload-860024
	I0210 13:01:28.357005  663506 main.go:141] libmachine: (test-preload-860024) DBG | unable to find current IP address of domain test-preload-860024 in network mk-test-preload-860024
	I0210 13:01:28.357082  663506 main.go:141] libmachine: (test-preload-860024) DBG | I0210 13:01:28.356987  663557 retry.go:31] will retry after 212.190739ms: waiting for domain to come up
	I0210 13:01:28.571335  663506 main.go:141] libmachine: (test-preload-860024) DBG | domain test-preload-860024 has defined MAC address 52:54:00:3e:ca:83 in network mk-test-preload-860024
	I0210 13:01:28.571722  663506 main.go:141] libmachine: (test-preload-860024) DBG | unable to find current IP address of domain test-preload-860024 in network mk-test-preload-860024
	I0210 13:01:28.571746  663506 main.go:141] libmachine: (test-preload-860024) DBG | I0210 13:01:28.571684  663557 retry.go:31] will retry after 308.044336ms: waiting for domain to come up
	I0210 13:01:28.881397  663506 main.go:141] libmachine: (test-preload-860024) DBG | domain test-preload-860024 has defined MAC address 52:54:00:3e:ca:83 in network mk-test-preload-860024
	I0210 13:01:28.881908  663506 main.go:141] libmachine: (test-preload-860024) DBG | unable to find current IP address of domain test-preload-860024 in network mk-test-preload-860024
	I0210 13:01:28.881940  663506 main.go:141] libmachine: (test-preload-860024) DBG | I0210 13:01:28.881877  663557 retry.go:31] will retry after 435.367089ms: waiting for domain to come up
	I0210 13:01:29.318807  663506 main.go:141] libmachine: (test-preload-860024) DBG | domain test-preload-860024 has defined MAC address 52:54:00:3e:ca:83 in network mk-test-preload-860024
	I0210 13:01:29.319185  663506 main.go:141] libmachine: (test-preload-860024) DBG | unable to find current IP address of domain test-preload-860024 in network mk-test-preload-860024
	I0210 13:01:29.319274  663506 main.go:141] libmachine: (test-preload-860024) DBG | I0210 13:01:29.319177  663557 retry.go:31] will retry after 484.330921ms: waiting for domain to come up
	I0210 13:01:29.804835  663506 main.go:141] libmachine: (test-preload-860024) DBG | domain test-preload-860024 has defined MAC address 52:54:00:3e:ca:83 in network mk-test-preload-860024
	I0210 13:01:29.805323  663506 main.go:141] libmachine: (test-preload-860024) DBG | unable to find current IP address of domain test-preload-860024 in network mk-test-preload-860024
	I0210 13:01:29.805362  663506 main.go:141] libmachine: (test-preload-860024) DBG | I0210 13:01:29.805292  663557 retry.go:31] will retry after 691.546271ms: waiting for domain to come up
	I0210 13:01:30.498541  663506 main.go:141] libmachine: (test-preload-860024) DBG | domain test-preload-860024 has defined MAC address 52:54:00:3e:ca:83 in network mk-test-preload-860024
	I0210 13:01:30.498974  663506 main.go:141] libmachine: (test-preload-860024) DBG | unable to find current IP address of domain test-preload-860024 in network mk-test-preload-860024
	I0210 13:01:30.499001  663506 main.go:141] libmachine: (test-preload-860024) DBG | I0210 13:01:30.498959  663557 retry.go:31] will retry after 624.400688ms: waiting for domain to come up
	I0210 13:01:31.124535  663506 main.go:141] libmachine: (test-preload-860024) DBG | domain test-preload-860024 has defined MAC address 52:54:00:3e:ca:83 in network mk-test-preload-860024
	I0210 13:01:31.124959  663506 main.go:141] libmachine: (test-preload-860024) DBG | unable to find current IP address of domain test-preload-860024 in network mk-test-preload-860024
	I0210 13:01:31.124989  663506 main.go:141] libmachine: (test-preload-860024) DBG | I0210 13:01:31.124909  663557 retry.go:31] will retry after 726.555562ms: waiting for domain to come up
	I0210 13:01:31.852989  663506 main.go:141] libmachine: (test-preload-860024) DBG | domain test-preload-860024 has defined MAC address 52:54:00:3e:ca:83 in network mk-test-preload-860024
	I0210 13:01:31.853486  663506 main.go:141] libmachine: (test-preload-860024) DBG | unable to find current IP address of domain test-preload-860024 in network mk-test-preload-860024
	I0210 13:01:31.853520  663506 main.go:141] libmachine: (test-preload-860024) DBG | I0210 13:01:31.853478  663557 retry.go:31] will retry after 1.388620269s: waiting for domain to come up
	I0210 13:01:33.243459  663506 main.go:141] libmachine: (test-preload-860024) DBG | domain test-preload-860024 has defined MAC address 52:54:00:3e:ca:83 in network mk-test-preload-860024
	I0210 13:01:33.243812  663506 main.go:141] libmachine: (test-preload-860024) DBG | unable to find current IP address of domain test-preload-860024 in network mk-test-preload-860024
	I0210 13:01:33.243855  663506 main.go:141] libmachine: (test-preload-860024) DBG | I0210 13:01:33.243810  663557 retry.go:31] will retry after 1.463945252s: waiting for domain to come up
	I0210 13:01:34.709478  663506 main.go:141] libmachine: (test-preload-860024) DBG | domain test-preload-860024 has defined MAC address 52:54:00:3e:ca:83 in network mk-test-preload-860024
	I0210 13:01:34.709849  663506 main.go:141] libmachine: (test-preload-860024) DBG | unable to find current IP address of domain test-preload-860024 in network mk-test-preload-860024
	I0210 13:01:34.709878  663506 main.go:141] libmachine: (test-preload-860024) DBG | I0210 13:01:34.709809  663557 retry.go:31] will retry after 1.873500766s: waiting for domain to come up
	I0210 13:01:36.585234  663506 main.go:141] libmachine: (test-preload-860024) DBG | domain test-preload-860024 has defined MAC address 52:54:00:3e:ca:83 in network mk-test-preload-860024
	I0210 13:01:36.585699  663506 main.go:141] libmachine: (test-preload-860024) DBG | unable to find current IP address of domain test-preload-860024 in network mk-test-preload-860024
	I0210 13:01:36.585742  663506 main.go:141] libmachine: (test-preload-860024) DBG | I0210 13:01:36.585639  663557 retry.go:31] will retry after 2.601915667s: waiting for domain to come up
	I0210 13:01:39.188789  663506 main.go:141] libmachine: (test-preload-860024) DBG | domain test-preload-860024 has defined MAC address 52:54:00:3e:ca:83 in network mk-test-preload-860024
	I0210 13:01:39.189216  663506 main.go:141] libmachine: (test-preload-860024) DBG | unable to find current IP address of domain test-preload-860024 in network mk-test-preload-860024
	I0210 13:01:39.189248  663506 main.go:141] libmachine: (test-preload-860024) DBG | I0210 13:01:39.189174  663557 retry.go:31] will retry after 2.665583053s: waiting for domain to come up
	I0210 13:01:41.858036  663506 main.go:141] libmachine: (test-preload-860024) DBG | domain test-preload-860024 has defined MAC address 52:54:00:3e:ca:83 in network mk-test-preload-860024
	I0210 13:01:41.858569  663506 main.go:141] libmachine: (test-preload-860024) DBG | unable to find current IP address of domain test-preload-860024 in network mk-test-preload-860024
	I0210 13:01:41.858592  663506 main.go:141] libmachine: (test-preload-860024) DBG | I0210 13:01:41.858534  663557 retry.go:31] will retry after 4.499468038s: waiting for domain to come up
	I0210 13:01:46.362579  663506 main.go:141] libmachine: (test-preload-860024) DBG | domain test-preload-860024 has defined MAC address 52:54:00:3e:ca:83 in network mk-test-preload-860024
	I0210 13:01:46.362990  663506 main.go:141] libmachine: (test-preload-860024) DBG | domain test-preload-860024 has current primary IP address 192.168.39.223 and MAC address 52:54:00:3e:ca:83 in network mk-test-preload-860024
	I0210 13:01:46.363010  663506 main.go:141] libmachine: (test-preload-860024) found domain IP: 192.168.39.223
	I0210 13:01:46.363019  663506 main.go:141] libmachine: (test-preload-860024) reserving static IP address...
	I0210 13:01:46.363520  663506 main.go:141] libmachine: (test-preload-860024) DBG | found host DHCP lease matching {name: "test-preload-860024", mac: "52:54:00:3e:ca:83", ip: "192.168.39.223"} in network mk-test-preload-860024: {Iface:virbr1 ExpiryTime:2025-02-10 14:01:38 +0000 UTC Type:0 Mac:52:54:00:3e:ca:83 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:test-preload-860024 Clientid:01:52:54:00:3e:ca:83}
	I0210 13:01:46.363546  663506 main.go:141] libmachine: (test-preload-860024) reserved static IP address 192.168.39.223 for domain test-preload-860024
	I0210 13:01:46.363565  663506 main.go:141] libmachine: (test-preload-860024) DBG | skip adding static IP to network mk-test-preload-860024 - found existing host DHCP lease matching {name: "test-preload-860024", mac: "52:54:00:3e:ca:83", ip: "192.168.39.223"}
	I0210 13:01:46.363588  663506 main.go:141] libmachine: (test-preload-860024) DBG | Getting to WaitForSSH function...
	I0210 13:01:46.363601  663506 main.go:141] libmachine: (test-preload-860024) waiting for SSH...
	I0210 13:01:46.366023  663506 main.go:141] libmachine: (test-preload-860024) DBG | domain test-preload-860024 has defined MAC address 52:54:00:3e:ca:83 in network mk-test-preload-860024
	I0210 13:01:46.366388  663506 main.go:141] libmachine: (test-preload-860024) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:ca:83", ip: ""} in network mk-test-preload-860024: {Iface:virbr1 ExpiryTime:2025-02-10 14:01:38 +0000 UTC Type:0 Mac:52:54:00:3e:ca:83 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:test-preload-860024 Clientid:01:52:54:00:3e:ca:83}
	I0210 13:01:46.366434  663506 main.go:141] libmachine: (test-preload-860024) DBG | domain test-preload-860024 has defined IP address 192.168.39.223 and MAC address 52:54:00:3e:ca:83 in network mk-test-preload-860024
	I0210 13:01:46.366603  663506 main.go:141] libmachine: (test-preload-860024) DBG | Using SSH client type: external
	I0210 13:01:46.366625  663506 main.go:141] libmachine: (test-preload-860024) DBG | Using SSH private key: /home/jenkins/minikube-integration/20383-625153/.minikube/machines/test-preload-860024/id_rsa (-rw-------)
	I0210 13:01:46.366656  663506 main.go:141] libmachine: (test-preload-860024) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.223 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20383-625153/.minikube/machines/test-preload-860024/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0210 13:01:46.366679  663506 main.go:141] libmachine: (test-preload-860024) DBG | About to run SSH command:
	I0210 13:01:46.366703  663506 main.go:141] libmachine: (test-preload-860024) DBG | exit 0
	I0210 13:01:46.488878  663506 main.go:141] libmachine: (test-preload-860024) DBG | SSH cmd err, output: <nil>: 
	I0210 13:01:46.489306  663506 main.go:141] libmachine: (test-preload-860024) Calling .GetConfigRaw
	I0210 13:01:46.490031  663506 main.go:141] libmachine: (test-preload-860024) Calling .GetIP
	I0210 13:01:46.492923  663506 main.go:141] libmachine: (test-preload-860024) DBG | domain test-preload-860024 has defined MAC address 52:54:00:3e:ca:83 in network mk-test-preload-860024
	I0210 13:01:46.493378  663506 main.go:141] libmachine: (test-preload-860024) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:ca:83", ip: ""} in network mk-test-preload-860024: {Iface:virbr1 ExpiryTime:2025-02-10 14:01:38 +0000 UTC Type:0 Mac:52:54:00:3e:ca:83 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:test-preload-860024 Clientid:01:52:54:00:3e:ca:83}
	I0210 13:01:46.493404  663506 main.go:141] libmachine: (test-preload-860024) DBG | domain test-preload-860024 has defined IP address 192.168.39.223 and MAC address 52:54:00:3e:ca:83 in network mk-test-preload-860024
	I0210 13:01:46.493694  663506 profile.go:143] Saving config to /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/test-preload-860024/config.json ...
	I0210 13:01:46.493930  663506 machine.go:93] provisionDockerMachine start ...
	I0210 13:01:46.493954  663506 main.go:141] libmachine: (test-preload-860024) Calling .DriverName
	I0210 13:01:46.494249  663506 main.go:141] libmachine: (test-preload-860024) Calling .GetSSHHostname
	I0210 13:01:46.496525  663506 main.go:141] libmachine: (test-preload-860024) DBG | domain test-preload-860024 has defined MAC address 52:54:00:3e:ca:83 in network mk-test-preload-860024
	I0210 13:01:46.496863  663506 main.go:141] libmachine: (test-preload-860024) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:ca:83", ip: ""} in network mk-test-preload-860024: {Iface:virbr1 ExpiryTime:2025-02-10 14:01:38 +0000 UTC Type:0 Mac:52:54:00:3e:ca:83 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:test-preload-860024 Clientid:01:52:54:00:3e:ca:83}
	I0210 13:01:46.496885  663506 main.go:141] libmachine: (test-preload-860024) DBG | domain test-preload-860024 has defined IP address 192.168.39.223 and MAC address 52:54:00:3e:ca:83 in network mk-test-preload-860024
	I0210 13:01:46.497043  663506 main.go:141] libmachine: (test-preload-860024) Calling .GetSSHPort
	I0210 13:01:46.497219  663506 main.go:141] libmachine: (test-preload-860024) Calling .GetSSHKeyPath
	I0210 13:01:46.497352  663506 main.go:141] libmachine: (test-preload-860024) Calling .GetSSHKeyPath
	I0210 13:01:46.497422  663506 main.go:141] libmachine: (test-preload-860024) Calling .GetSSHUsername
	I0210 13:01:46.497563  663506 main.go:141] libmachine: Using SSH client type: native
	I0210 13:01:46.497817  663506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.223 22 <nil> <nil>}
	I0210 13:01:46.497835  663506 main.go:141] libmachine: About to run SSH command:
	hostname
	I0210 13:01:46.597236  663506 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0210 13:01:46.597274  663506 main.go:141] libmachine: (test-preload-860024) Calling .GetMachineName
	I0210 13:01:46.597576  663506 buildroot.go:166] provisioning hostname "test-preload-860024"
	I0210 13:01:46.597609  663506 main.go:141] libmachine: (test-preload-860024) Calling .GetMachineName
	I0210 13:01:46.597784  663506 main.go:141] libmachine: (test-preload-860024) Calling .GetSSHHostname
	I0210 13:01:46.600532  663506 main.go:141] libmachine: (test-preload-860024) DBG | domain test-preload-860024 has defined MAC address 52:54:00:3e:ca:83 in network mk-test-preload-860024
	I0210 13:01:46.600920  663506 main.go:141] libmachine: (test-preload-860024) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:ca:83", ip: ""} in network mk-test-preload-860024: {Iface:virbr1 ExpiryTime:2025-02-10 14:01:38 +0000 UTC Type:0 Mac:52:54:00:3e:ca:83 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:test-preload-860024 Clientid:01:52:54:00:3e:ca:83}
	I0210 13:01:46.600956  663506 main.go:141] libmachine: (test-preload-860024) DBG | domain test-preload-860024 has defined IP address 192.168.39.223 and MAC address 52:54:00:3e:ca:83 in network mk-test-preload-860024
	I0210 13:01:46.601082  663506 main.go:141] libmachine: (test-preload-860024) Calling .GetSSHPort
	I0210 13:01:46.601266  663506 main.go:141] libmachine: (test-preload-860024) Calling .GetSSHKeyPath
	I0210 13:01:46.601442  663506 main.go:141] libmachine: (test-preload-860024) Calling .GetSSHKeyPath
	I0210 13:01:46.601532  663506 main.go:141] libmachine: (test-preload-860024) Calling .GetSSHUsername
	I0210 13:01:46.601693  663506 main.go:141] libmachine: Using SSH client type: native
	I0210 13:01:46.601936  663506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.223 22 <nil> <nil>}
	I0210 13:01:46.601951  663506 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-860024 && echo "test-preload-860024" | sudo tee /etc/hostname
	I0210 13:01:46.715896  663506 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-860024
	
	I0210 13:01:46.715925  663506 main.go:141] libmachine: (test-preload-860024) Calling .GetSSHHostname
	I0210 13:01:46.718781  663506 main.go:141] libmachine: (test-preload-860024) DBG | domain test-preload-860024 has defined MAC address 52:54:00:3e:ca:83 in network mk-test-preload-860024
	I0210 13:01:46.719115  663506 main.go:141] libmachine: (test-preload-860024) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:ca:83", ip: ""} in network mk-test-preload-860024: {Iface:virbr1 ExpiryTime:2025-02-10 14:01:38 +0000 UTC Type:0 Mac:52:54:00:3e:ca:83 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:test-preload-860024 Clientid:01:52:54:00:3e:ca:83}
	I0210 13:01:46.719149  663506 main.go:141] libmachine: (test-preload-860024) DBG | domain test-preload-860024 has defined IP address 192.168.39.223 and MAC address 52:54:00:3e:ca:83 in network mk-test-preload-860024
	I0210 13:01:46.719324  663506 main.go:141] libmachine: (test-preload-860024) Calling .GetSSHPort
	I0210 13:01:46.719487  663506 main.go:141] libmachine: (test-preload-860024) Calling .GetSSHKeyPath
	I0210 13:01:46.719645  663506 main.go:141] libmachine: (test-preload-860024) Calling .GetSSHKeyPath
	I0210 13:01:46.719746  663506 main.go:141] libmachine: (test-preload-860024) Calling .GetSSHUsername
	I0210 13:01:46.719904  663506 main.go:141] libmachine: Using SSH client type: native
	I0210 13:01:46.720069  663506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.223 22 <nil> <nil>}
	I0210 13:01:46.720083  663506 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-860024' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-860024/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-860024' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0210 13:01:46.824416  663506 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0210 13:01:46.824451  663506 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20383-625153/.minikube CaCertPath:/home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20383-625153/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20383-625153/.minikube}
	I0210 13:01:46.824479  663506 buildroot.go:174] setting up certificates
	I0210 13:01:46.824492  663506 provision.go:84] configureAuth start
	I0210 13:01:46.824505  663506 main.go:141] libmachine: (test-preload-860024) Calling .GetMachineName
	I0210 13:01:46.824750  663506 main.go:141] libmachine: (test-preload-860024) Calling .GetIP
	I0210 13:01:46.827339  663506 main.go:141] libmachine: (test-preload-860024) DBG | domain test-preload-860024 has defined MAC address 52:54:00:3e:ca:83 in network mk-test-preload-860024
	I0210 13:01:46.827677  663506 main.go:141] libmachine: (test-preload-860024) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:ca:83", ip: ""} in network mk-test-preload-860024: {Iface:virbr1 ExpiryTime:2025-02-10 14:01:38 +0000 UTC Type:0 Mac:52:54:00:3e:ca:83 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:test-preload-860024 Clientid:01:52:54:00:3e:ca:83}
	I0210 13:01:46.827713  663506 main.go:141] libmachine: (test-preload-860024) DBG | domain test-preload-860024 has defined IP address 192.168.39.223 and MAC address 52:54:00:3e:ca:83 in network mk-test-preload-860024
	I0210 13:01:46.827816  663506 main.go:141] libmachine: (test-preload-860024) Calling .GetSSHHostname
	I0210 13:01:46.830062  663506 main.go:141] libmachine: (test-preload-860024) DBG | domain test-preload-860024 has defined MAC address 52:54:00:3e:ca:83 in network mk-test-preload-860024
	I0210 13:01:46.830394  663506 main.go:141] libmachine: (test-preload-860024) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:ca:83", ip: ""} in network mk-test-preload-860024: {Iface:virbr1 ExpiryTime:2025-02-10 14:01:38 +0000 UTC Type:0 Mac:52:54:00:3e:ca:83 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:test-preload-860024 Clientid:01:52:54:00:3e:ca:83}
	I0210 13:01:46.830419  663506 main.go:141] libmachine: (test-preload-860024) DBG | domain test-preload-860024 has defined IP address 192.168.39.223 and MAC address 52:54:00:3e:ca:83 in network mk-test-preload-860024
	I0210 13:01:46.830561  663506 provision.go:143] copyHostCerts
	I0210 13:01:46.830634  663506 exec_runner.go:144] found /home/jenkins/minikube-integration/20383-625153/.minikube/ca.pem, removing ...
	I0210 13:01:46.830648  663506 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20383-625153/.minikube/ca.pem
	I0210 13:01:46.830733  663506 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20383-625153/.minikube/ca.pem (1082 bytes)
	I0210 13:01:46.830841  663506 exec_runner.go:144] found /home/jenkins/minikube-integration/20383-625153/.minikube/cert.pem, removing ...
	I0210 13:01:46.830851  663506 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20383-625153/.minikube/cert.pem
	I0210 13:01:46.830891  663506 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20383-625153/.minikube/cert.pem (1123 bytes)
	I0210 13:01:46.831034  663506 exec_runner.go:144] found /home/jenkins/minikube-integration/20383-625153/.minikube/key.pem, removing ...
	I0210 13:01:46.831047  663506 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20383-625153/.minikube/key.pem
	I0210 13:01:46.831089  663506 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20383-625153/.minikube/key.pem (1675 bytes)
	I0210 13:01:46.831165  663506 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20383-625153/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca-key.pem org=jenkins.test-preload-860024 san=[127.0.0.1 192.168.39.223 localhost minikube test-preload-860024]
	I0210 13:01:46.872647  663506 provision.go:177] copyRemoteCerts
	I0210 13:01:46.872733  663506 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0210 13:01:46.872769  663506 main.go:141] libmachine: (test-preload-860024) Calling .GetSSHHostname
	I0210 13:01:46.874955  663506 main.go:141] libmachine: (test-preload-860024) DBG | domain test-preload-860024 has defined MAC address 52:54:00:3e:ca:83 in network mk-test-preload-860024
	I0210 13:01:46.875294  663506 main.go:141] libmachine: (test-preload-860024) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:ca:83", ip: ""} in network mk-test-preload-860024: {Iface:virbr1 ExpiryTime:2025-02-10 14:01:38 +0000 UTC Type:0 Mac:52:54:00:3e:ca:83 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:test-preload-860024 Clientid:01:52:54:00:3e:ca:83}
	I0210 13:01:46.875318  663506 main.go:141] libmachine: (test-preload-860024) DBG | domain test-preload-860024 has defined IP address 192.168.39.223 and MAC address 52:54:00:3e:ca:83 in network mk-test-preload-860024
	I0210 13:01:46.875492  663506 main.go:141] libmachine: (test-preload-860024) Calling .GetSSHPort
	I0210 13:01:46.875684  663506 main.go:141] libmachine: (test-preload-860024) Calling .GetSSHKeyPath
	I0210 13:01:46.875814  663506 main.go:141] libmachine: (test-preload-860024) Calling .GetSSHUsername
	I0210 13:01:46.875967  663506 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/test-preload-860024/id_rsa Username:docker}
	I0210 13:01:46.954972  663506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0210 13:01:46.977072  663506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0210 13:01:46.998303  663506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0210 13:01:47.019536  663506 provision.go:87] duration metric: took 195.024946ms to configureAuth
	I0210 13:01:47.019571  663506 buildroot.go:189] setting minikube options for container-runtime
	I0210 13:01:47.019798  663506 config.go:182] Loaded profile config "test-preload-860024": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0210 13:01:47.019902  663506 main.go:141] libmachine: (test-preload-860024) Calling .GetSSHHostname
	I0210 13:01:47.022771  663506 main.go:141] libmachine: (test-preload-860024) DBG | domain test-preload-860024 has defined MAC address 52:54:00:3e:ca:83 in network mk-test-preload-860024
	I0210 13:01:47.023123  663506 main.go:141] libmachine: (test-preload-860024) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:ca:83", ip: ""} in network mk-test-preload-860024: {Iface:virbr1 ExpiryTime:2025-02-10 14:01:38 +0000 UTC Type:0 Mac:52:54:00:3e:ca:83 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:test-preload-860024 Clientid:01:52:54:00:3e:ca:83}
	I0210 13:01:47.023154  663506 main.go:141] libmachine: (test-preload-860024) DBG | domain test-preload-860024 has defined IP address 192.168.39.223 and MAC address 52:54:00:3e:ca:83 in network mk-test-preload-860024
	I0210 13:01:47.023324  663506 main.go:141] libmachine: (test-preload-860024) Calling .GetSSHPort
	I0210 13:01:47.023496  663506 main.go:141] libmachine: (test-preload-860024) Calling .GetSSHKeyPath
	I0210 13:01:47.023657  663506 main.go:141] libmachine: (test-preload-860024) Calling .GetSSHKeyPath
	I0210 13:01:47.023813  663506 main.go:141] libmachine: (test-preload-860024) Calling .GetSSHUsername
	I0210 13:01:47.023989  663506 main.go:141] libmachine: Using SSH client type: native
	I0210 13:01:47.024149  663506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.223 22 <nil> <nil>}
	I0210 13:01:47.024163  663506 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0210 13:01:47.237834  663506 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0210 13:01:47.237870  663506 machine.go:96] duration metric: took 743.924653ms to provisionDockerMachine
	I0210 13:01:47.237884  663506 start.go:293] postStartSetup for "test-preload-860024" (driver="kvm2")
	I0210 13:01:47.237895  663506 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0210 13:01:47.237918  663506 main.go:141] libmachine: (test-preload-860024) Calling .DriverName
	I0210 13:01:47.238282  663506 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0210 13:01:47.238336  663506 main.go:141] libmachine: (test-preload-860024) Calling .GetSSHHostname
	I0210 13:01:47.240811  663506 main.go:141] libmachine: (test-preload-860024) DBG | domain test-preload-860024 has defined MAC address 52:54:00:3e:ca:83 in network mk-test-preload-860024
	I0210 13:01:47.241136  663506 main.go:141] libmachine: (test-preload-860024) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:ca:83", ip: ""} in network mk-test-preload-860024: {Iface:virbr1 ExpiryTime:2025-02-10 14:01:38 +0000 UTC Type:0 Mac:52:54:00:3e:ca:83 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:test-preload-860024 Clientid:01:52:54:00:3e:ca:83}
	I0210 13:01:47.241160  663506 main.go:141] libmachine: (test-preload-860024) DBG | domain test-preload-860024 has defined IP address 192.168.39.223 and MAC address 52:54:00:3e:ca:83 in network mk-test-preload-860024
	I0210 13:01:47.241361  663506 main.go:141] libmachine: (test-preload-860024) Calling .GetSSHPort
	I0210 13:01:47.241641  663506 main.go:141] libmachine: (test-preload-860024) Calling .GetSSHKeyPath
	I0210 13:01:47.241832  663506 main.go:141] libmachine: (test-preload-860024) Calling .GetSSHUsername
	I0210 13:01:47.241986  663506 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/test-preload-860024/id_rsa Username:docker}
	I0210 13:01:47.323227  663506 ssh_runner.go:195] Run: cat /etc/os-release
	I0210 13:01:47.327115  663506 info.go:137] Remote host: Buildroot 2023.02.9
	I0210 13:01:47.327140  663506 filesync.go:126] Scanning /home/jenkins/minikube-integration/20383-625153/.minikube/addons for local assets ...
	I0210 13:01:47.327201  663506 filesync.go:126] Scanning /home/jenkins/minikube-integration/20383-625153/.minikube/files for local assets ...
	I0210 13:01:47.327279  663506 filesync.go:149] local asset: /home/jenkins/minikube-integration/20383-625153/.minikube/files/etc/ssl/certs/6323522.pem -> 6323522.pem in /etc/ssl/certs
	I0210 13:01:47.327375  663506 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0210 13:01:47.336086  663506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/files/etc/ssl/certs/6323522.pem --> /etc/ssl/certs/6323522.pem (1708 bytes)
	I0210 13:01:47.357651  663506 start.go:296] duration metric: took 119.750411ms for postStartSetup
	I0210 13:01:47.357706  663506 fix.go:56] duration metric: took 20.235308457s for fixHost
	I0210 13:01:47.357729  663506 main.go:141] libmachine: (test-preload-860024) Calling .GetSSHHostname
	I0210 13:01:47.360818  663506 main.go:141] libmachine: (test-preload-860024) DBG | domain test-preload-860024 has defined MAC address 52:54:00:3e:ca:83 in network mk-test-preload-860024
	I0210 13:01:47.361213  663506 main.go:141] libmachine: (test-preload-860024) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:ca:83", ip: ""} in network mk-test-preload-860024: {Iface:virbr1 ExpiryTime:2025-02-10 14:01:38 +0000 UTC Type:0 Mac:52:54:00:3e:ca:83 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:test-preload-860024 Clientid:01:52:54:00:3e:ca:83}
	I0210 13:01:47.361245  663506 main.go:141] libmachine: (test-preload-860024) DBG | domain test-preload-860024 has defined IP address 192.168.39.223 and MAC address 52:54:00:3e:ca:83 in network mk-test-preload-860024
	I0210 13:01:47.361413  663506 main.go:141] libmachine: (test-preload-860024) Calling .GetSSHPort
	I0210 13:01:47.361580  663506 main.go:141] libmachine: (test-preload-860024) Calling .GetSSHKeyPath
	I0210 13:01:47.361695  663506 main.go:141] libmachine: (test-preload-860024) Calling .GetSSHKeyPath
	I0210 13:01:47.361796  663506 main.go:141] libmachine: (test-preload-860024) Calling .GetSSHUsername
	I0210 13:01:47.361920  663506 main.go:141] libmachine: Using SSH client type: native
	I0210 13:01:47.362096  663506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.223 22 <nil> <nil>}
	I0210 13:01:47.362106  663506 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0210 13:01:47.461567  663506 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739192507.437631719
	
	I0210 13:01:47.461597  663506 fix.go:216] guest clock: 1739192507.437631719
	I0210 13:01:47.461607  663506 fix.go:229] Guest: 2025-02-10 13:01:47.437631719 +0000 UTC Remote: 2025-02-10 13:01:47.357710584 +0000 UTC m=+23.714385223 (delta=79.921135ms)
	I0210 13:01:47.461658  663506 fix.go:200] guest clock delta is within tolerance: 79.921135ms
	I0210 13:01:47.461666  663506 start.go:83] releasing machines lock for "test-preload-860024", held for 20.339293148s
	I0210 13:01:47.461698  663506 main.go:141] libmachine: (test-preload-860024) Calling .DriverName
	I0210 13:01:47.461996  663506 main.go:141] libmachine: (test-preload-860024) Calling .GetIP
	I0210 13:01:47.464679  663506 main.go:141] libmachine: (test-preload-860024) DBG | domain test-preload-860024 has defined MAC address 52:54:00:3e:ca:83 in network mk-test-preload-860024
	I0210 13:01:47.464996  663506 main.go:141] libmachine: (test-preload-860024) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:ca:83", ip: ""} in network mk-test-preload-860024: {Iface:virbr1 ExpiryTime:2025-02-10 14:01:38 +0000 UTC Type:0 Mac:52:54:00:3e:ca:83 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:test-preload-860024 Clientid:01:52:54:00:3e:ca:83}
	I0210 13:01:47.465031  663506 main.go:141] libmachine: (test-preload-860024) DBG | domain test-preload-860024 has defined IP address 192.168.39.223 and MAC address 52:54:00:3e:ca:83 in network mk-test-preload-860024
	I0210 13:01:47.465152  663506 main.go:141] libmachine: (test-preload-860024) Calling .DriverName
	I0210 13:01:47.465620  663506 main.go:141] libmachine: (test-preload-860024) Calling .DriverName
	I0210 13:01:47.465803  663506 main.go:141] libmachine: (test-preload-860024) Calling .DriverName
	I0210 13:01:47.465908  663506 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0210 13:01:47.465964  663506 main.go:141] libmachine: (test-preload-860024) Calling .GetSSHHostname
	I0210 13:01:47.466015  663506 ssh_runner.go:195] Run: cat /version.json
	I0210 13:01:47.466036  663506 main.go:141] libmachine: (test-preload-860024) Calling .GetSSHHostname
	I0210 13:01:47.468756  663506 main.go:141] libmachine: (test-preload-860024) DBG | domain test-preload-860024 has defined MAC address 52:54:00:3e:ca:83 in network mk-test-preload-860024
	I0210 13:01:47.468891  663506 main.go:141] libmachine: (test-preload-860024) DBG | domain test-preload-860024 has defined MAC address 52:54:00:3e:ca:83 in network mk-test-preload-860024
	I0210 13:01:47.469180  663506 main.go:141] libmachine: (test-preload-860024) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:ca:83", ip: ""} in network mk-test-preload-860024: {Iface:virbr1 ExpiryTime:2025-02-10 14:01:38 +0000 UTC Type:0 Mac:52:54:00:3e:ca:83 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:test-preload-860024 Clientid:01:52:54:00:3e:ca:83}
	I0210 13:01:47.469211  663506 main.go:141] libmachine: (test-preload-860024) DBG | domain test-preload-860024 has defined IP address 192.168.39.223 and MAC address 52:54:00:3e:ca:83 in network mk-test-preload-860024
	I0210 13:01:47.469243  663506 main.go:141] libmachine: (test-preload-860024) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:ca:83", ip: ""} in network mk-test-preload-860024: {Iface:virbr1 ExpiryTime:2025-02-10 14:01:38 +0000 UTC Type:0 Mac:52:54:00:3e:ca:83 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:test-preload-860024 Clientid:01:52:54:00:3e:ca:83}
	I0210 13:01:47.469265  663506 main.go:141] libmachine: (test-preload-860024) DBG | domain test-preload-860024 has defined IP address 192.168.39.223 and MAC address 52:54:00:3e:ca:83 in network mk-test-preload-860024
	I0210 13:01:47.469333  663506 main.go:141] libmachine: (test-preload-860024) Calling .GetSSHPort
	I0210 13:01:47.469506  663506 main.go:141] libmachine: (test-preload-860024) Calling .GetSSHPort
	I0210 13:01:47.469538  663506 main.go:141] libmachine: (test-preload-860024) Calling .GetSSHKeyPath
	I0210 13:01:47.469682  663506 main.go:141] libmachine: (test-preload-860024) Calling .GetSSHKeyPath
	I0210 13:01:47.469805  663506 main.go:141] libmachine: (test-preload-860024) Calling .GetSSHUsername
	I0210 13:01:47.469863  663506 main.go:141] libmachine: (test-preload-860024) Calling .GetSSHUsername
	I0210 13:01:47.469954  663506 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/test-preload-860024/id_rsa Username:docker}
	I0210 13:01:47.470000  663506 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/test-preload-860024/id_rsa Username:docker}
	I0210 13:01:47.566516  663506 ssh_runner.go:195] Run: systemctl --version
	I0210 13:01:47.572245  663506 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0210 13:01:47.719281  663506 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0210 13:01:47.725257  663506 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0210 13:01:47.725338  663506 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0210 13:01:47.740388  663506 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0210 13:01:47.740412  663506 start.go:495] detecting cgroup driver to use...
	I0210 13:01:47.740476  663506 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0210 13:01:47.755370  663506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0210 13:01:47.768329  663506 docker.go:217] disabling cri-docker service (if available) ...
	I0210 13:01:47.768402  663506 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0210 13:01:47.780756  663506 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0210 13:01:47.792985  663506 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0210 13:01:47.899768  663506 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0210 13:01:48.035439  663506 docker.go:233] disabling docker service ...
	I0210 13:01:48.035508  663506 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0210 13:01:48.049302  663506 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0210 13:01:48.060946  663506 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0210 13:01:48.190843  663506 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0210 13:01:48.320904  663506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0210 13:01:48.333713  663506 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0210 13:01:48.351235  663506 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0210 13:01:48.351304  663506 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:01:48.361512  663506 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0210 13:01:48.361587  663506 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:01:48.371841  663506 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:01:48.381928  663506 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:01:48.395092  663506 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0210 13:01:48.405705  663506 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:01:48.415580  663506 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:01:48.431558  663506 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
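The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: the pause image, the cgroup manager, the conmon cgroup, and the unprivileged-port sysctl. As a rough illustration only (not minikube's implementation), the first two edits could be expressed in Go like this, writing the result to a separate .example file so the real config stays untouched:

    // crio_conf_example.go - Go equivalent of the pause_image / cgroup_manager
    // sed edits shown in the log above; paths and values are taken from this run.
    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(path)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        conf := string(data)
        // Replace whole lines, mirroring: sed 's|^.*pause_image = .*$|...|'
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.7"`)
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        if err := os.WriteFile(path+".example", []byte(conf), 0o644); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("wrote", path+".example")
    }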
	I0210 13:01:48.441871  663506 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0210 13:01:48.450477  663506 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0210 13:01:48.450569  663506 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0210 13:01:48.462785  663506 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0210 13:01:48.471290  663506 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 13:01:48.581084  663506 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0210 13:01:48.668620  663506 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0210 13:01:48.668711  663506 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0210 13:01:48.673032  663506 start.go:563] Will wait 60s for crictl version
	I0210 13:01:48.673096  663506 ssh_runner.go:195] Run: which crictl
	I0210 13:01:48.676698  663506 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0210 13:01:48.711753  663506 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0210 13:01:48.711833  663506 ssh_runner.go:195] Run: crio --version
	I0210 13:01:48.737637  663506 ssh_runner.go:195] Run: crio --version
	I0210 13:01:48.764347  663506 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0210 13:01:48.765916  663506 main.go:141] libmachine: (test-preload-860024) Calling .GetIP
	I0210 13:01:48.768552  663506 main.go:141] libmachine: (test-preload-860024) DBG | domain test-preload-860024 has defined MAC address 52:54:00:3e:ca:83 in network mk-test-preload-860024
	I0210 13:01:48.768884  663506 main.go:141] libmachine: (test-preload-860024) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:ca:83", ip: ""} in network mk-test-preload-860024: {Iface:virbr1 ExpiryTime:2025-02-10 14:01:38 +0000 UTC Type:0 Mac:52:54:00:3e:ca:83 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:test-preload-860024 Clientid:01:52:54:00:3e:ca:83}
	I0210 13:01:48.768910  663506 main.go:141] libmachine: (test-preload-860024) DBG | domain test-preload-860024 has defined IP address 192.168.39.223 and MAC address 52:54:00:3e:ca:83 in network mk-test-preload-860024
	I0210 13:01:48.769227  663506 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0210 13:01:48.773055  663506 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 13:01:48.784822  663506 kubeadm.go:883] updating cluster {Name:test-preload-860024 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-860024 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.223 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0210 13:01:48.784988  663506 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0210 13:01:48.785066  663506 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 13:01:48.820413  663506 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0210 13:01:48.820495  663506 ssh_runner.go:195] Run: which lz4
	I0210 13:01:48.824048  663506 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0210 13:01:48.827803  663506 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0210 13:01:48.827831  663506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0210 13:01:50.210349  663506 crio.go:462] duration metric: took 1.386319095s to copy over tarball
	I0210 13:01:50.210445  663506 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0210 13:01:52.532690  663506 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.322206931s)
	I0210 13:01:52.532723  663506 crio.go:469] duration metric: took 2.322336974s to extract the tarball
	I0210 13:01:52.532733  663506 ssh_runner.go:146] rm: /preloaded.tar.lz4
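
The preceding lines show the preload path: an existence check for /preloaded.tar.lz4, a copy of the cached tarball onto the node, extraction into /var with xattrs preserved, and cleanup. A rough Go sketch of that flow, assuming local execution rather than minikube's SSH runner and using placeholder paths:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// restorePreload copies the cached tarball into place if it is missing,
// extracts it into dest with the same tar flags as the log, then removes it.
func restorePreload(cached, remote, dest string) error {
	if _, err := os.Stat(remote); err != nil {
		if err := exec.Command("sudo", "cp", cached, remote).Run(); err != nil {
			return fmt.Errorf("copy preload: %w", err)
		}
	}
	tar := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", dest, "-xf", remote)
	if err := tar.Run(); err != nil {
		return fmt.Errorf("extract preload: %w", err)
	}
	return exec.Command("sudo", "rm", "-f", remote).Run()
}

func main() {
	fmt.Println(restorePreload(
		"/path/to/cache/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4",
		"/preloaded.tar.lz4", "/var"))
}
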
	I0210 13:01:52.572708  663506 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 13:01:52.611199  663506 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0210 13:01:52.611228  663506 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0210 13:01:52.611317  663506 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 13:01:52.611338  663506 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0210 13:01:52.611375  663506 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0210 13:01:52.611400  663506 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0210 13:01:52.611351  663506 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0210 13:01:52.611322  663506 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0210 13:01:52.611375  663506 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0210 13:01:52.611378  663506 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0210 13:01:52.612925  663506 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0210 13:01:52.612954  663506 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 13:01:52.612954  663506 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0210 13:01:52.612938  663506 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0210 13:01:52.612985  663506 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0210 13:01:52.613004  663506 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0210 13:01:52.613009  663506 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0210 13:01:52.613016  663506 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0210 13:01:52.750470  663506 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0210 13:01:52.761603  663506 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0210 13:01:52.761603  663506 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0210 13:01:52.762255  663506 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0210 13:01:52.763289  663506 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0210 13:01:52.786016  663506 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0210 13:01:52.792065  663506 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0210 13:01:52.833900  663506 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0210 13:01:52.833960  663506 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0210 13:01:52.834026  663506 ssh_runner.go:195] Run: which crictl
	I0210 13:01:52.874131  663506 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0210 13:01:52.874181  663506 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0210 13:01:52.874189  663506 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0210 13:01:52.874213  663506 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0210 13:01:52.874237  663506 ssh_runner.go:195] Run: which crictl
	I0210 13:01:52.874137  663506 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0210 13:01:52.874257  663506 ssh_runner.go:195] Run: which crictl
	I0210 13:01:52.874266  663506 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0210 13:01:52.874189  663506 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0210 13:01:52.874298  663506 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0210 13:01:52.874301  663506 ssh_runner.go:195] Run: which crictl
	I0210 13:01:52.874320  663506 ssh_runner.go:195] Run: which crictl
	I0210 13:01:52.907038  663506 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0210 13:01:52.907106  663506 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0210 13:01:52.907164  663506 ssh_runner.go:195] Run: which crictl
	I0210 13:01:52.909379  663506 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0210 13:01:52.909415  663506 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0210 13:01:52.909432  663506 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0210 13:01:52.909456  663506 ssh_runner.go:195] Run: which crictl
	I0210 13:01:52.909567  663506 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0210 13:01:52.909595  663506 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0210 13:01:52.909624  663506 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0210 13:01:52.909676  663506 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0210 13:01:52.919458  663506 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0210 13:01:53.019571  663506 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0210 13:01:53.019571  663506 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0210 13:01:53.053702  663506 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0210 13:01:53.053726  663506 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0210 13:01:53.062334  663506 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0210 13:01:53.062387  663506 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0210 13:01:53.062540  663506 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0210 13:01:53.124343  663506 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0210 13:01:53.130347  663506 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0210 13:01:53.214494  663506 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0210 13:01:53.214562  663506 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0210 13:01:53.215587  663506 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0210 13:01:53.215668  663506 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0210 13:01:53.215670  663506 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0210 13:01:53.252389  663506 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0210 13:01:53.318333  663506 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20383-625153/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0210 13:01:53.318472  663506 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0210 13:01:53.331199  663506 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20383-625153/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0210 13:01:53.331325  663506 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0210 13:01:53.369658  663506 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20383-625153/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0210 13:01:53.369786  663506 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0210 13:01:53.373344  663506 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20383-625153/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0210 13:01:53.373450  663506 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0210 13:01:53.374837  663506 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20383-625153/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0210 13:01:53.374903  663506 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20383-625153/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0210 13:01:53.374919  663506 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0210 13:01:53.374989  663506 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0210 13:01:53.378591  663506 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20383-625153/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0210 13:01:53.378651  663506 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0210 13:01:53.378672  663506 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0210 13:01:53.378687  663506 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0210 13:01:53.378694  663506 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0210 13:01:53.378760  663506 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0210 13:01:53.381954  663506 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0210 13:01:53.387813  663506 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0210 13:01:53.388283  663506 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0210 13:01:53.388309  663506 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0210 13:01:53.388486  663506 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0210 13:01:53.451740  663506 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 13:01:56.742981  663506 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4: (3.364185726s)
	I0210 13:01:56.743020  663506 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20383-625153/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0210 13:01:56.743044  663506 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0210 13:01:56.743050  663506 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.291265728s)
	I0210 13:01:56.743092  663506 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0210 13:01:56.885029  663506 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20383-625153/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0210 13:01:56.885080  663506 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0210 13:01:56.885157  663506 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0210 13:01:57.519996  663506 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20383-625153/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0210 13:01:57.520054  663506 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0210 13:01:57.520150  663506 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0210 13:01:57.855995  663506 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20383-625153/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0210 13:01:57.856052  663506 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0210 13:01:57.856136  663506 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0210 13:01:59.905986  663506 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.049820942s)
	I0210 13:01:59.906023  663506 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20383-625153/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0210 13:01:59.906054  663506 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0210 13:01:59.906128  663506 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0210 13:02:00.747820  663506 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20383-625153/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0210 13:02:00.747882  663506 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0210 13:02:00.747959  663506 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0210 13:02:01.195452  663506 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20383-625153/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0210 13:02:01.195506  663506 cache_images.go:123] Successfully loaded all cached images
	I0210 13:02:01.195512  663506 cache_images.go:92] duration metric: took 8.584271588s to LoadCachedImages
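
Each "needs transfer" decision above comes from comparing the tags reported by `crictl images --output json` against the required image list. A simplified Go sketch of such a check (the JSON struct below is a reduced subset of crictl's output, and the missingImages helper is hypothetical):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList models only the fields this sketch needs from crictl's JSON output.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// missingImages returns the required tags the container runtime does not yet have.
func missingImages(required []string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return nil, err
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return nil, err
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	var missing []string
	for _, want := range required {
		if !have[want] {
			missing = append(missing, want)
		}
	}
	return missing, nil
}

func main() {
	fmt.Println(missingImages([]string{
		"registry.k8s.io/kube-apiserver:v1.24.4",
		"registry.k8s.io/pause:3.7",
	}))
}
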
	I0210 13:02:01.195525  663506 kubeadm.go:934] updating node { 192.168.39.223 8443 v1.24.4 crio true true} ...
	I0210 13:02:01.195632  663506 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-860024 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.223
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-860024 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0210 13:02:01.195719  663506 ssh_runner.go:195] Run: crio config
	I0210 13:02:01.240387  663506 cni.go:84] Creating CNI manager for ""
	I0210 13:02:01.240415  663506 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 13:02:01.240425  663506 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0210 13:02:01.240445  663506 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.223 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-860024 NodeName:test-preload-860024 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.223"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.223 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0210 13:02:01.240606  663506 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.223
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-860024"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.223
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.223"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0210 13:02:01.240681  663506 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0210 13:02:01.249957  663506 binaries.go:44] Found k8s binaries, skipping transfer
	I0210 13:02:01.250022  663506 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0210 13:02:01.258274  663506 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0210 13:02:01.272951  663506 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0210 13:02:01.287226  663506 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0210 13:02:01.301927  663506 ssh_runner.go:195] Run: grep 192.168.39.223	control-plane.minikube.internal$ /etc/hosts
	I0210 13:02:01.305179  663506 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.223	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 13:02:01.315483  663506 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 13:02:01.432312  663506 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 13:02:01.447736  663506 certs.go:68] Setting up /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/test-preload-860024 for IP: 192.168.39.223
	I0210 13:02:01.447760  663506 certs.go:194] generating shared ca certs ...
	I0210 13:02:01.447780  663506 certs.go:226] acquiring lock for ca certs: {Name:mkf2f72f82a6bd7e3c16bb224cd26b80c3c89e0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:02:01.447987  663506 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20383-625153/.minikube/ca.key
	I0210 13:02:01.448045  663506 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20383-625153/.minikube/proxy-client-ca.key
	I0210 13:02:01.448059  663506 certs.go:256] generating profile certs ...
	I0210 13:02:01.448168  663506 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/test-preload-860024/client.key
	I0210 13:02:01.448256  663506 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/test-preload-860024/apiserver.key.62ca57ba
	I0210 13:02:01.448312  663506 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/test-preload-860024/proxy-client.key
	I0210 13:02:01.448458  663506 certs.go:484] found cert: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/632352.pem (1338 bytes)
	W0210 13:02:01.448490  663506 certs.go:480] ignoring /home/jenkins/minikube-integration/20383-625153/.minikube/certs/632352_empty.pem, impossibly tiny 0 bytes
	I0210 13:02:01.448500  663506 certs.go:484] found cert: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca-key.pem (1675 bytes)
	I0210 13:02:01.448521  663506 certs.go:484] found cert: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca.pem (1082 bytes)
	I0210 13:02:01.448557  663506 certs.go:484] found cert: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/cert.pem (1123 bytes)
	I0210 13:02:01.448578  663506 certs.go:484] found cert: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/key.pem (1675 bytes)
	I0210 13:02:01.448613  663506 certs.go:484] found cert: /home/jenkins/minikube-integration/20383-625153/.minikube/files/etc/ssl/certs/6323522.pem (1708 bytes)
	I0210 13:02:01.449255  663506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0210 13:02:01.485058  663506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0210 13:02:01.520309  663506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0210 13:02:01.556303  663506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0210 13:02:01.586882  663506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/test-preload-860024/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0210 13:02:01.615761  663506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/test-preload-860024/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0210 13:02:01.653507  663506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/test-preload-860024/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0210 13:02:01.677616  663506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/test-preload-860024/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0210 13:02:01.698807  663506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0210 13:02:01.719768  663506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/certs/632352.pem --> /usr/share/ca-certificates/632352.pem (1338 bytes)
	I0210 13:02:01.743699  663506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/files/etc/ssl/certs/6323522.pem --> /usr/share/ca-certificates/6323522.pem (1708 bytes)
	I0210 13:02:01.767310  663506 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0210 13:02:01.783612  663506 ssh_runner.go:195] Run: openssl version
	I0210 13:02:01.788955  663506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0210 13:02:01.798244  663506 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0210 13:02:01.802120  663506 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 10 12:05 /usr/share/ca-certificates/minikubeCA.pem
	I0210 13:02:01.802174  663506 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0210 13:02:01.807525  663506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0210 13:02:01.817201  663506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/632352.pem && ln -fs /usr/share/ca-certificates/632352.pem /etc/ssl/certs/632352.pem"
	I0210 13:02:01.826745  663506 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/632352.pem
	I0210 13:02:01.830705  663506 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 10 12:13 /usr/share/ca-certificates/632352.pem
	I0210 13:02:01.830762  663506 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/632352.pem
	I0210 13:02:01.835740  663506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/632352.pem /etc/ssl/certs/51391683.0"
	I0210 13:02:01.845172  663506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6323522.pem && ln -fs /usr/share/ca-certificates/6323522.pem /etc/ssl/certs/6323522.pem"
	I0210 13:02:01.854490  663506 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6323522.pem
	I0210 13:02:01.858504  663506 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 10 12:13 /usr/share/ca-certificates/6323522.pem
	I0210 13:02:01.858557  663506 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6323522.pem
	I0210 13:02:01.863580  663506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6323522.pem /etc/ssl/certs/3ec20f2e.0"
	I0210 13:02:01.872957  663506 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0210 13:02:01.876917  663506 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0210 13:02:01.882210  663506 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0210 13:02:01.887618  663506 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0210 13:02:01.892964  663506 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0210 13:02:01.898262  663506 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0210 13:02:01.903489  663506 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
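
The `openssl x509 ... -checkend 86400` runs above verify that each cluster certificate remains valid for at least another 24 hours. An equivalent check in Go might look like this (checkEnd is an illustrative helper; the certificate path is taken from the log purely as an example):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkEnd mirrors `openssl x509 -checkend`: fail if the cert expires within window.
func checkEnd(path string, window time.Duration) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return err
	}
	if time.Now().Add(window).After(cert.NotAfter) {
		return fmt.Errorf("%s expires within %v (NotAfter: %v)", path, window, cert.NotAfter)
	}
	return nil
}

func main() {
	fmt.Println(checkEnd("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour))
}
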
	I0210 13:02:01.908697  663506 kubeadm.go:392] StartCluster: {Name:test-preload-860024 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-860024 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.223 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 13:02:01.908782  663506 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0210 13:02:01.908841  663506 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0210 13:02:01.948271  663506 cri.go:89] found id: ""
	I0210 13:02:01.948344  663506 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0210 13:02:01.957832  663506 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0210 13:02:01.957854  663506 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0210 13:02:01.957897  663506 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0210 13:02:01.966853  663506 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0210 13:02:01.967474  663506 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-860024" does not appear in /home/jenkins/minikube-integration/20383-625153/kubeconfig
	I0210 13:02:01.967644  663506 kubeconfig.go:62] /home/jenkins/minikube-integration/20383-625153/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-860024" cluster setting kubeconfig missing "test-preload-860024" context setting]
	I0210 13:02:01.968096  663506 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20383-625153/kubeconfig: {Name:mke7ef1ff4ff1259856291979fdd0337df3a08b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:02:01.968904  663506 kapi.go:59] client config for test-preload-860024: &rest.Config{Host:"https://192.168.39.223:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20383-625153/.minikube/profiles/test-preload-860024/client.crt", KeyFile:"/home/jenkins/minikube-integration/20383-625153/.minikube/profiles/test-preload-860024/client.key", CAFile:"/home/jenkins/minikube-integration/20383-625153/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24db320), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0210 13:02:01.969396  663506 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0210 13:02:01.969417  663506 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0210 13:02:01.969429  663506 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0210 13:02:01.969436  663506 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0210 13:02:01.969909  663506 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0210 13:02:01.978748  663506 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.223
	I0210 13:02:01.978781  663506 kubeadm.go:1160] stopping kube-system containers ...
	I0210 13:02:01.978795  663506 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0210 13:02:01.978859  663506 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0210 13:02:02.013225  663506 cri.go:89] found id: ""
	I0210 13:02:02.013317  663506 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0210 13:02:02.028935  663506 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 13:02:02.037857  663506 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 13:02:02.037877  663506 kubeadm.go:157] found existing configuration files:
	
	I0210 13:02:02.037920  663506 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0210 13:02:02.046281  663506 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 13:02:02.046332  663506 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 13:02:02.054833  663506 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0210 13:02:02.063014  663506 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 13:02:02.063065  663506 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 13:02:02.071313  663506 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0210 13:02:02.079397  663506 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 13:02:02.079479  663506 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 13:02:02.087815  663506 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0210 13:02:02.095761  663506 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 13:02:02.095827  663506 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0210 13:02:02.104109  663506 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0210 13:02:02.112777  663506 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 13:02:02.199295  663506 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 13:02:03.026950  663506 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0210 13:02:03.286077  663506 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 13:02:03.362151  663506 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0210 13:02:03.446286  663506 api_server.go:52] waiting for apiserver process to appear ...
	I0210 13:02:03.446377  663506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:02:03.947265  663506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:02:04.447310  663506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:02:04.465159  663506 api_server.go:72] duration metric: took 1.018868701s to wait for apiserver process to appear ...
	I0210 13:02:04.465202  663506 api_server.go:88] waiting for apiserver healthz status ...
	I0210 13:02:04.465230  663506 api_server.go:253] Checking apiserver healthz at https://192.168.39.223:8443/healthz ...
	I0210 13:02:04.465796  663506 api_server.go:269] stopped: https://192.168.39.223:8443/healthz: Get "https://192.168.39.223:8443/healthz": dial tcp 192.168.39.223:8443: connect: connection refused
	I0210 13:02:04.965597  663506 api_server.go:253] Checking apiserver healthz at https://192.168.39.223:8443/healthz ...
	I0210 13:02:04.966278  663506 api_server.go:269] stopped: https://192.168.39.223:8443/healthz: Get "https://192.168.39.223:8443/healthz": dial tcp 192.168.39.223:8443: connect: connection refused
	I0210 13:02:05.465987  663506 api_server.go:253] Checking apiserver healthz at https://192.168.39.223:8443/healthz ...
	I0210 13:02:08.210637  663506 api_server.go:279] https://192.168.39.223:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0210 13:02:08.210668  663506 api_server.go:103] status: https://192.168.39.223:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0210 13:02:08.210686  663506 api_server.go:253] Checking apiserver healthz at https://192.168.39.223:8443/healthz ...
	I0210 13:02:08.273077  663506 api_server.go:279] https://192.168.39.223:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0210 13:02:08.273122  663506 api_server.go:103] status: https://192.168.39.223:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0210 13:02:08.465395  663506 api_server.go:253] Checking apiserver healthz at https://192.168.39.223:8443/healthz ...
	I0210 13:02:08.471524  663506 api_server.go:279] https://192.168.39.223:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0210 13:02:08.471561  663506 api_server.go:103] status: https://192.168.39.223:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0210 13:02:08.965382  663506 api_server.go:253] Checking apiserver healthz at https://192.168.39.223:8443/healthz ...
	I0210 13:02:08.971301  663506 api_server.go:279] https://192.168.39.223:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0210 13:02:08.971347  663506 api_server.go:103] status: https://192.168.39.223:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0210 13:02:09.465897  663506 api_server.go:253] Checking apiserver healthz at https://192.168.39.223:8443/healthz ...
	I0210 13:02:09.483654  663506 api_server.go:279] https://192.168.39.223:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0210 13:02:09.483692  663506 api_server.go:103] status: https://192.168.39.223:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0210 13:02:09.966270  663506 api_server.go:253] Checking apiserver healthz at https://192.168.39.223:8443/healthz ...
	I0210 13:02:09.971687  663506 api_server.go:279] https://192.168.39.223:8443/healthz returned 200:
	ok
	I0210 13:02:09.978048  663506 api_server.go:141] control plane version: v1.24.4
	I0210 13:02:09.978077  663506 api_server.go:131] duration metric: took 5.512867505s to wait for apiserver health ...
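
The healthz sequence above is a simple poll loop: request /healthz, treat connection refusals and 403/500 responses as "not ready yet", and stop once a 200 "ok" comes back. A minimal sketch of that loop, assuming a throwaway test cluster (hence the skipped TLS verification) and a placeholder URL:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 or the timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// A throwaway test cluster uses a self-signed CA; real code should
		// load the cluster CA instead of skipping verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Printf("healthz not reachable yet: %v\n", err)
		} else {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // the endpoint answered "ok"
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy within %v", timeout)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.39.223:8443/healthz", time.Minute))
}
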
	I0210 13:02:09.978088  663506 cni.go:84] Creating CNI manager for ""
	I0210 13:02:09.978097  663506 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 13:02:09.979904  663506 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0210 13:02:09.981370  663506 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0210 13:02:09.991225  663506 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0210 13:02:10.008986  663506 system_pods.go:43] waiting for kube-system pods to appear ...
	I0210 13:02:10.012600  663506 system_pods.go:59] 7 kube-system pods found
	I0210 13:02:10.012652  663506 system_pods.go:61] "coredns-6d4b75cb6d-xv89c" [48ee27af-325d-48fa-96b6-cc5fa9328eba] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0210 13:02:10.012664  663506 system_pods.go:61] "etcd-test-preload-860024" [8bb885d9-e054-4620-b32d-47b59a2eb549] Running
	I0210 13:02:10.012674  663506 system_pods.go:61] "kube-apiserver-test-preload-860024" [9db9c4c6-0673-4870-8bdd-3ca4a455a03d] Running
	I0210 13:02:10.012680  663506 system_pods.go:61] "kube-controller-manager-test-preload-860024" [b16fb946-460a-43e7-aa56-26eda359eacd] Running
	I0210 13:02:10.012688  663506 system_pods.go:61] "kube-proxy-4276s" [fc531182-23ca-4c28-88cb-1860b31cb5f1] Running
	I0210 13:02:10.012693  663506 system_pods.go:61] "kube-scheduler-test-preload-860024" [2b6a21ee-4572-4d25-b520-b8d8487d886d] Running
	I0210 13:02:10.012702  663506 system_pods.go:61] "storage-provisioner" [494ddd3b-05d4-40ea-b4e3-53a6af6ae94d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0210 13:02:10.012712  663506 system_pods.go:74] duration metric: took 3.697145ms to wait for pod list to return data ...
	I0210 13:02:10.012738  663506 node_conditions.go:102] verifying NodePressure condition ...
	I0210 13:02:10.014779  663506 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 13:02:10.014806  663506 node_conditions.go:123] node cpu capacity is 2
	I0210 13:02:10.014823  663506 node_conditions.go:105] duration metric: took 2.078283ms to run NodePressure ...
	I0210 13:02:10.014854  663506 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 13:02:10.158957  663506 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0210 13:02:10.161970  663506 retry.go:31] will retry after 185.081818ms: kubelet not initialised
	I0210 13:02:10.351073  663506 retry.go:31] will retry after 271.663011ms: kubelet not initialised
	I0210 13:02:10.628145  663506 retry.go:31] will retry after 712.859389ms: kubelet not initialised
	I0210 13:02:11.348086  663506 retry.go:31] will retry after 622.466773ms: kubelet not initialised
	I0210 13:02:11.974709  663506 retry.go:31] will retry after 849.143305ms: kubelet not initialised
	I0210 13:02:12.828962  663506 retry.go:31] will retry after 2.726845041s: kubelet not initialised
	I0210 13:02:15.561063  663506 kubeadm.go:739] kubelet initialised
	I0210 13:02:15.561095  663506 kubeadm.go:740] duration metric: took 5.402102888s waiting for restarted kubelet to initialise ...
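
The "will retry after ..." lines reflect a retry-with-growing-delay wait for the restarted kubelet. A generic sketch of that pattern (waitFor is a hypothetical helper; the delays are illustrative, not the ones minikube uses):

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitFor retries check with a doubling delay until it succeeds or the timeout passes.
func waitFor(check func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for {
		if err := check(); err == nil {
			return nil
		} else if time.Now().After(deadline) {
			return fmt.Errorf("timed out: %w", err)
		} else {
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
			delay *= 2
		}
	}
}

func main() {
	start := time.Now()
	err := waitFor(func() error {
		// Simulated condition: "initialised" one second after start.
		if time.Since(start) < time.Second {
			return errors.New("kubelet not initialised")
		}
		return nil
	}, 30*time.Second)
	fmt.Println("done:", err)
}
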
	I0210 13:02:15.561126  663506 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0210 13:02:15.563979  663506 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-xv89c" in "kube-system" namespace to be "Ready" ...
	I0210 13:02:15.568397  663506 pod_ready.go:98] node "test-preload-860024" hosting pod "coredns-6d4b75cb6d-xv89c" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-860024" has status "Ready":"False"
	I0210 13:02:15.568421  663506 pod_ready.go:82] duration metric: took 4.415997ms for pod "coredns-6d4b75cb6d-xv89c" in "kube-system" namespace to be "Ready" ...
	E0210 13:02:15.568431  663506 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-860024" hosting pod "coredns-6d4b75cb6d-xv89c" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-860024" has status "Ready":"False"
	I0210 13:02:15.568438  663506 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-860024" in "kube-system" namespace to be "Ready" ...
	I0210 13:02:15.572170  663506 pod_ready.go:98] node "test-preload-860024" hosting pod "etcd-test-preload-860024" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-860024" has status "Ready":"False"
	I0210 13:02:15.572193  663506 pod_ready.go:82] duration metric: took 3.746951ms for pod "etcd-test-preload-860024" in "kube-system" namespace to be "Ready" ...
	E0210 13:02:15.572202  663506 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-860024" hosting pod "etcd-test-preload-860024" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-860024" has status "Ready":"False"
	I0210 13:02:15.572208  663506 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-860024" in "kube-system" namespace to be "Ready" ...
	I0210 13:02:15.575466  663506 pod_ready.go:98] node "test-preload-860024" hosting pod "kube-apiserver-test-preload-860024" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-860024" has status "Ready":"False"
	I0210 13:02:15.575486  663506 pod_ready.go:82] duration metric: took 3.269481ms for pod "kube-apiserver-test-preload-860024" in "kube-system" namespace to be "Ready" ...
	E0210 13:02:15.575493  663506 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-860024" hosting pod "kube-apiserver-test-preload-860024" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-860024" has status "Ready":"False"
	I0210 13:02:15.575499  663506 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-860024" in "kube-system" namespace to be "Ready" ...
	I0210 13:02:15.578814  663506 pod_ready.go:98] node "test-preload-860024" hosting pod "kube-controller-manager-test-preload-860024" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-860024" has status "Ready":"False"
	I0210 13:02:15.578843  663506 pod_ready.go:82] duration metric: took 3.334613ms for pod "kube-controller-manager-test-preload-860024" in "kube-system" namespace to be "Ready" ...
	E0210 13:02:15.578852  663506 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-860024" hosting pod "kube-controller-manager-test-preload-860024" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-860024" has status "Ready":"False"
	I0210 13:02:15.578860  663506 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-4276s" in "kube-system" namespace to be "Ready" ...
	I0210 13:02:15.963120  663506 pod_ready.go:98] node "test-preload-860024" hosting pod "kube-proxy-4276s" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-860024" has status "Ready":"False"
	I0210 13:02:15.963166  663506 pod_ready.go:82] duration metric: took 384.296128ms for pod "kube-proxy-4276s" in "kube-system" namespace to be "Ready" ...
	E0210 13:02:15.963184  663506 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-860024" hosting pod "kube-proxy-4276s" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-860024" has status "Ready":"False"
	I0210 13:02:15.963195  663506 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-860024" in "kube-system" namespace to be "Ready" ...
	I0210 13:02:16.361168  663506 pod_ready.go:98] node "test-preload-860024" hosting pod "kube-scheduler-test-preload-860024" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-860024" has status "Ready":"False"
	I0210 13:02:16.361207  663506 pod_ready.go:82] duration metric: took 398.002824ms for pod "kube-scheduler-test-preload-860024" in "kube-system" namespace to be "Ready" ...
	E0210 13:02:16.361222  663506 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-860024" hosting pod "kube-scheduler-test-preload-860024" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-860024" has status "Ready":"False"
	I0210 13:02:16.361232  663506 pod_ready.go:39] duration metric: took 800.093016ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0210 13:02:16.361270  663506 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0210 13:02:16.386880  663506 ops.go:34] apiserver oom_adj: -16
	I0210 13:02:16.386912  663506 kubeadm.go:597] duration metric: took 14.429053113s to restartPrimaryControlPlane
	I0210 13:02:16.386922  663506 kubeadm.go:394] duration metric: took 14.478235631s to StartCluster
	I0210 13:02:16.386941  663506 settings.go:142] acquiring lock: {Name:mk4bd8331d641665e48ff1d1c4382f2e915609be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:02:16.387024  663506 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20383-625153/kubeconfig
	I0210 13:02:16.387703  663506 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20383-625153/kubeconfig: {Name:mke7ef1ff4ff1259856291979fdd0337df3a08b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:02:16.387969  663506 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.223 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0210 13:02:16.388056  663506 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0210 13:02:16.388200  663506 addons.go:69] Setting storage-provisioner=true in profile "test-preload-860024"
	I0210 13:02:16.388226  663506 addons.go:238] Setting addon storage-provisioner=true in "test-preload-860024"
	I0210 13:02:16.388223  663506 config.go:182] Loaded profile config "test-preload-860024": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	W0210 13:02:16.388235  663506 addons.go:247] addon storage-provisioner should already be in state true
	I0210 13:02:16.388226  663506 addons.go:69] Setting default-storageclass=true in profile "test-preload-860024"
	I0210 13:02:16.388292  663506 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-860024"
	I0210 13:02:16.388272  663506 host.go:66] Checking if "test-preload-860024" exists ...
	I0210 13:02:16.388705  663506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:02:16.388756  663506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:02:16.388801  663506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:02:16.388841  663506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:02:16.389958  663506 out.go:177] * Verifying Kubernetes components...
	I0210 13:02:16.391492  663506 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 13:02:16.405288  663506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41625
	I0210 13:02:16.405298  663506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44855
	I0210 13:02:16.405873  663506 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:02:16.405934  663506 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:02:16.406539  663506 main.go:141] libmachine: Using API Version  1
	I0210 13:02:16.406556  663506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:02:16.406554  663506 main.go:141] libmachine: Using API Version  1
	I0210 13:02:16.406573  663506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:02:16.406937  663506 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:02:16.407014  663506 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:02:16.407208  663506 main.go:141] libmachine: (test-preload-860024) Calling .GetState
	I0210 13:02:16.407561  663506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:02:16.407611  663506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:02:16.409745  663506 kapi.go:59] client config for test-preload-860024: &rest.Config{Host:"https://192.168.39.223:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20383-625153/.minikube/profiles/test-preload-860024/client.crt", KeyFile:"/home/jenkins/minikube-integration/20383-625153/.minikube/profiles/test-preload-860024/client.key", CAFile:"/home/jenkins/minikube-integration/20383-625153/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24db320), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0210 13:02:16.410139  663506 addons.go:238] Setting addon default-storageclass=true in "test-preload-860024"
	W0210 13:02:16.410160  663506 addons.go:247] addon default-storageclass should already be in state true
	I0210 13:02:16.410197  663506 host.go:66] Checking if "test-preload-860024" exists ...
	I0210 13:02:16.410619  663506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:02:16.410694  663506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:02:16.424019  663506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35701
	I0210 13:02:16.424536  663506 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:02:16.425178  663506 main.go:141] libmachine: Using API Version  1
	I0210 13:02:16.425210  663506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:02:16.425592  663506 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:02:16.425811  663506 main.go:141] libmachine: (test-preload-860024) Calling .GetState
	I0210 13:02:16.427097  663506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39735
	I0210 13:02:16.427664  663506 main.go:141] libmachine: (test-preload-860024) Calling .DriverName
	I0210 13:02:16.427666  663506 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:02:16.428278  663506 main.go:141] libmachine: Using API Version  1
	I0210 13:02:16.428307  663506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:02:16.428708  663506 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:02:16.429262  663506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:02:16.429307  663506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:02:16.429980  663506 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 13:02:16.431373  663506 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0210 13:02:16.431390  663506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0210 13:02:16.431411  663506 main.go:141] libmachine: (test-preload-860024) Calling .GetSSHHostname
	I0210 13:02:16.434730  663506 main.go:141] libmachine: (test-preload-860024) DBG | domain test-preload-860024 has defined MAC address 52:54:00:3e:ca:83 in network mk-test-preload-860024
	I0210 13:02:16.435123  663506 main.go:141] libmachine: (test-preload-860024) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:ca:83", ip: ""} in network mk-test-preload-860024: {Iface:virbr1 ExpiryTime:2025-02-10 14:01:38 +0000 UTC Type:0 Mac:52:54:00:3e:ca:83 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:test-preload-860024 Clientid:01:52:54:00:3e:ca:83}
	I0210 13:02:16.435146  663506 main.go:141] libmachine: (test-preload-860024) DBG | domain test-preload-860024 has defined IP address 192.168.39.223 and MAC address 52:54:00:3e:ca:83 in network mk-test-preload-860024
	I0210 13:02:16.435286  663506 main.go:141] libmachine: (test-preload-860024) Calling .GetSSHPort
	I0210 13:02:16.435479  663506 main.go:141] libmachine: (test-preload-860024) Calling .GetSSHKeyPath
	I0210 13:02:16.435613  663506 main.go:141] libmachine: (test-preload-860024) Calling .GetSSHUsername
	I0210 13:02:16.435744  663506 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/test-preload-860024/id_rsa Username:docker}
	I0210 13:02:16.464784  663506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35297
	I0210 13:02:16.465297  663506 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:02:16.465850  663506 main.go:141] libmachine: Using API Version  1
	I0210 13:02:16.465884  663506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:02:16.466207  663506 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:02:16.466477  663506 main.go:141] libmachine: (test-preload-860024) Calling .GetState
	I0210 13:02:16.468299  663506 main.go:141] libmachine: (test-preload-860024) Calling .DriverName
	I0210 13:02:16.468564  663506 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0210 13:02:16.468583  663506 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0210 13:02:16.468600  663506 main.go:141] libmachine: (test-preload-860024) Calling .GetSSHHostname
	I0210 13:02:16.471422  663506 main.go:141] libmachine: (test-preload-860024) DBG | domain test-preload-860024 has defined MAC address 52:54:00:3e:ca:83 in network mk-test-preload-860024
	I0210 13:02:16.471926  663506 main.go:141] libmachine: (test-preload-860024) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:ca:83", ip: ""} in network mk-test-preload-860024: {Iface:virbr1 ExpiryTime:2025-02-10 14:01:38 +0000 UTC Type:0 Mac:52:54:00:3e:ca:83 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:test-preload-860024 Clientid:01:52:54:00:3e:ca:83}
	I0210 13:02:16.471956  663506 main.go:141] libmachine: (test-preload-860024) DBG | domain test-preload-860024 has defined IP address 192.168.39.223 and MAC address 52:54:00:3e:ca:83 in network mk-test-preload-860024
	I0210 13:02:16.472106  663506 main.go:141] libmachine: (test-preload-860024) Calling .GetSSHPort
	I0210 13:02:16.472357  663506 main.go:141] libmachine: (test-preload-860024) Calling .GetSSHKeyPath
	I0210 13:02:16.472548  663506 main.go:141] libmachine: (test-preload-860024) Calling .GetSSHUsername
	I0210 13:02:16.472713  663506 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/test-preload-860024/id_rsa Username:docker}
	I0210 13:02:16.602316  663506 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 13:02:16.618967  663506 node_ready.go:35] waiting up to 6m0s for node "test-preload-860024" to be "Ready" ...
	I0210 13:02:16.751809  663506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0210 13:02:16.791583  663506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0210 13:02:17.841282  663506 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.049653037s)
	I0210 13:02:17.841351  663506 main.go:141] libmachine: Making call to close driver server
	I0210 13:02:17.841357  663506 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.089508039s)
	I0210 13:02:17.841399  663506 main.go:141] libmachine: Making call to close driver server
	I0210 13:02:17.841411  663506 main.go:141] libmachine: (test-preload-860024) Calling .Close
	I0210 13:02:17.841365  663506 main.go:141] libmachine: (test-preload-860024) Calling .Close
	I0210 13:02:17.841697  663506 main.go:141] libmachine: Successfully made call to close driver server
	I0210 13:02:17.841714  663506 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 13:02:17.841723  663506 main.go:141] libmachine: Making call to close driver server
	I0210 13:02:17.841726  663506 main.go:141] libmachine: Successfully made call to close driver server
	I0210 13:02:17.841738  663506 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 13:02:17.841761  663506 main.go:141] libmachine: Making call to close driver server
	I0210 13:02:17.841780  663506 main.go:141] libmachine: (test-preload-860024) Calling .Close
	I0210 13:02:17.841730  663506 main.go:141] libmachine: (test-preload-860024) Calling .Close
	I0210 13:02:17.842009  663506 main.go:141] libmachine: Successfully made call to close driver server
	I0210 13:02:17.842040  663506 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 13:02:17.842049  663506 main.go:141] libmachine: (test-preload-860024) DBG | Closing plugin on server side
	I0210 13:02:17.842128  663506 main.go:141] libmachine: Successfully made call to close driver server
	I0210 13:02:17.842146  663506 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 13:02:17.842071  663506 main.go:141] libmachine: (test-preload-860024) DBG | Closing plugin on server side
	I0210 13:02:17.848391  663506 main.go:141] libmachine: Making call to close driver server
	I0210 13:02:17.848407  663506 main.go:141] libmachine: (test-preload-860024) Calling .Close
	I0210 13:02:17.848643  663506 main.go:141] libmachine: Successfully made call to close driver server
	I0210 13:02:17.848658  663506 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 13:02:17.851142  663506 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0210 13:02:17.852326  663506 addons.go:514] duration metric: took 1.464288733s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0210 13:02:18.622963  663506 node_ready.go:53] node "test-preload-860024" has status "Ready":"False"
	I0210 13:02:19.123281  663506 node_ready.go:49] node "test-preload-860024" has status "Ready":"True"
	I0210 13:02:19.123310  663506 node_ready.go:38] duration metric: took 2.504293805s for node "test-preload-860024" to be "Ready" ...
	I0210 13:02:19.123323  663506 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0210 13:02:19.126911  663506 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-xv89c" in "kube-system" namespace to be "Ready" ...
	I0210 13:02:19.131236  663506 pod_ready.go:93] pod "coredns-6d4b75cb6d-xv89c" in "kube-system" namespace has status "Ready":"True"
	I0210 13:02:19.131260  663506 pod_ready.go:82] duration metric: took 4.318829ms for pod "coredns-6d4b75cb6d-xv89c" in "kube-system" namespace to be "Ready" ...
	I0210 13:02:19.131272  663506 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-860024" in "kube-system" namespace to be "Ready" ...
	I0210 13:02:19.135730  663506 pod_ready.go:93] pod "etcd-test-preload-860024" in "kube-system" namespace has status "Ready":"True"
	I0210 13:02:19.135752  663506 pod_ready.go:82] duration metric: took 4.471266ms for pod "etcd-test-preload-860024" in "kube-system" namespace to be "Ready" ...
	I0210 13:02:19.135763  663506 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-860024" in "kube-system" namespace to be "Ready" ...
	I0210 13:02:19.139566  663506 pod_ready.go:93] pod "kube-apiserver-test-preload-860024" in "kube-system" namespace has status "Ready":"True"
	I0210 13:02:19.139584  663506 pod_ready.go:82] duration metric: took 3.814363ms for pod "kube-apiserver-test-preload-860024" in "kube-system" namespace to be "Ready" ...
	I0210 13:02:19.139593  663506 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-860024" in "kube-system" namespace to be "Ready" ...
	I0210 13:02:19.761717  663506 pod_ready.go:93] pod "kube-controller-manager-test-preload-860024" in "kube-system" namespace has status "Ready":"True"
	I0210 13:02:19.761746  663506 pod_ready.go:82] duration metric: took 622.146623ms for pod "kube-controller-manager-test-preload-860024" in "kube-system" namespace to be "Ready" ...
	I0210 13:02:19.761757  663506 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4276s" in "kube-system" namespace to be "Ready" ...
	I0210 13:02:20.160801  663506 pod_ready.go:93] pod "kube-proxy-4276s" in "kube-system" namespace has status "Ready":"True"
	I0210 13:02:20.160840  663506 pod_ready.go:82] duration metric: took 399.075227ms for pod "kube-proxy-4276s" in "kube-system" namespace to be "Ready" ...
	I0210 13:02:20.160856  663506 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-860024" in "kube-system" namespace to be "Ready" ...
	I0210 13:02:22.166056  663506 pod_ready.go:103] pod "kube-scheduler-test-preload-860024" in "kube-system" namespace has status "Ready":"False"
	I0210 13:02:23.166220  663506 pod_ready.go:93] pod "kube-scheduler-test-preload-860024" in "kube-system" namespace has status "Ready":"True"
	I0210 13:02:23.166248  663506 pod_ready.go:82] duration metric: took 3.005383641s for pod "kube-scheduler-test-preload-860024" in "kube-system" namespace to be "Ready" ...
	I0210 13:02:23.166260  663506 pod_ready.go:39] duration metric: took 4.042925354s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0210 13:02:23.166278  663506 api_server.go:52] waiting for apiserver process to appear ...
	I0210 13:02:23.166345  663506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:02:23.180051  663506 api_server.go:72] duration metric: took 6.792045449s to wait for apiserver process to appear ...
	I0210 13:02:23.180082  663506 api_server.go:88] waiting for apiserver healthz status ...
	I0210 13:02:23.180099  663506 api_server.go:253] Checking apiserver healthz at https://192.168.39.223:8443/healthz ...
	I0210 13:02:23.185230  663506 api_server.go:279] https://192.168.39.223:8443/healthz returned 200:
	ok
	I0210 13:02:23.186220  663506 api_server.go:141] control plane version: v1.24.4
	I0210 13:02:23.186246  663506 api_server.go:131] duration metric: took 6.155545ms to wait for apiserver health ...
	I0210 13:02:23.186258  663506 system_pods.go:43] waiting for kube-system pods to appear ...
	I0210 13:02:23.189590  663506 system_pods.go:59] 7 kube-system pods found
	I0210 13:02:23.189620  663506 system_pods.go:61] "coredns-6d4b75cb6d-xv89c" [48ee27af-325d-48fa-96b6-cc5fa9328eba] Running
	I0210 13:02:23.189628  663506 system_pods.go:61] "etcd-test-preload-860024" [8bb885d9-e054-4620-b32d-47b59a2eb549] Running
	I0210 13:02:23.189633  663506 system_pods.go:61] "kube-apiserver-test-preload-860024" [9db9c4c6-0673-4870-8bdd-3ca4a455a03d] Running
	I0210 13:02:23.189640  663506 system_pods.go:61] "kube-controller-manager-test-preload-860024" [b16fb946-460a-43e7-aa56-26eda359eacd] Running
	I0210 13:02:23.189645  663506 system_pods.go:61] "kube-proxy-4276s" [fc531182-23ca-4c28-88cb-1860b31cb5f1] Running
	I0210 13:02:23.189651  663506 system_pods.go:61] "kube-scheduler-test-preload-860024" [2b6a21ee-4572-4d25-b520-b8d8487d886d] Running
	I0210 13:02:23.189657  663506 system_pods.go:61] "storage-provisioner" [494ddd3b-05d4-40ea-b4e3-53a6af6ae94d] Running
	I0210 13:02:23.189664  663506 system_pods.go:74] duration metric: took 3.399173ms to wait for pod list to return data ...
	I0210 13:02:23.189676  663506 default_sa.go:34] waiting for default service account to be created ...
	I0210 13:02:23.360509  663506 default_sa.go:45] found service account: "default"
	I0210 13:02:23.360542  663506 default_sa.go:55] duration metric: took 170.855289ms for default service account to be created ...
	I0210 13:02:23.360552  663506 system_pods.go:116] waiting for k8s-apps to be running ...
	I0210 13:02:23.561812  663506 system_pods.go:86] 7 kube-system pods found
	I0210 13:02:23.561848  663506 system_pods.go:89] "coredns-6d4b75cb6d-xv89c" [48ee27af-325d-48fa-96b6-cc5fa9328eba] Running
	I0210 13:02:23.561854  663506 system_pods.go:89] "etcd-test-preload-860024" [8bb885d9-e054-4620-b32d-47b59a2eb549] Running
	I0210 13:02:23.561862  663506 system_pods.go:89] "kube-apiserver-test-preload-860024" [9db9c4c6-0673-4870-8bdd-3ca4a455a03d] Running
	I0210 13:02:23.561866  663506 system_pods.go:89] "kube-controller-manager-test-preload-860024" [b16fb946-460a-43e7-aa56-26eda359eacd] Running
	I0210 13:02:23.561869  663506 system_pods.go:89] "kube-proxy-4276s" [fc531182-23ca-4c28-88cb-1860b31cb5f1] Running
	I0210 13:02:23.561872  663506 system_pods.go:89] "kube-scheduler-test-preload-860024" [2b6a21ee-4572-4d25-b520-b8d8487d886d] Running
	I0210 13:02:23.561875  663506 system_pods.go:89] "storage-provisioner" [494ddd3b-05d4-40ea-b4e3-53a6af6ae94d] Running
	I0210 13:02:23.561883  663506 system_pods.go:126] duration metric: took 201.323764ms to wait for k8s-apps to be running ...
	I0210 13:02:23.561890  663506 system_svc.go:44] waiting for kubelet service to be running ....
	I0210 13:02:23.561939  663506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 13:02:23.577506  663506 system_svc.go:56] duration metric: took 15.599638ms WaitForService to wait for kubelet
	I0210 13:02:23.577536  663506 kubeadm.go:582] duration metric: took 7.189536223s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0210 13:02:23.577559  663506 node_conditions.go:102] verifying NodePressure condition ...
	I0210 13:02:23.761619  663506 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 13:02:23.761649  663506 node_conditions.go:123] node cpu capacity is 2
	I0210 13:02:23.761660  663506 node_conditions.go:105] duration metric: took 184.09453ms to run NodePressure ...
	I0210 13:02:23.761672  663506 start.go:241] waiting for startup goroutines ...
	I0210 13:02:23.761679  663506 start.go:246] waiting for cluster config update ...
	I0210 13:02:23.761695  663506 start.go:255] writing updated cluster config ...
	I0210 13:02:23.761962  663506 ssh_runner.go:195] Run: rm -f paused
	I0210 13:02:23.810411  663506 start.go:600] kubectl: 1.32.1, cluster: 1.24.4 (minor skew: 8)
	I0210 13:02:23.812482  663506 out.go:201] 
	W0210 13:02:23.813998  663506 out.go:270] ! /usr/local/bin/kubectl is version 1.32.1, which may have incompatibilities with Kubernetes 1.24.4.
	I0210 13:02:23.815262  663506 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0210 13:02:23.816491  663506 out.go:177] * Done! kubectl is now configured to use "test-preload-860024" cluster and "default" namespace by default
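	[editor's note] The startup log above records the apiserver readiness check: minikube polls https://192.168.39.223:8443/healthz until it returns HTTP 200 ("ok") before declaring the control plane healthy. A minimal sketch of an equivalent poll in Go follows; the endpoint URL is taken from the log, while the timeout, poll interval, and the use of InsecureSkipVerify (instead of loading the cluster CA from ~/.minikube/ca.crt) are illustrative assumptions, not minikube's actual implementation.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or the deadline expires.
	// TLS verification is skipped here only for brevity; a real client would
	// trust the cluster CA certificate instead.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver answered /healthz with 200
				}
			}
			time.Sleep(500 * time.Millisecond) // assumed poll interval
		}
		return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
	}

	func main() {
		// Address taken from the log above; adjust for your own cluster.
		if err := waitForHealthz("https://192.168.39.223:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("apiserver healthy")
	}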
	
	
	==> CRI-O <==
	Feb 10 13:02:24 test-preload-860024 crio[671]: time="2025-02-10 13:02:24.675059734Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739192544675040120,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4bb02c11-c9f5-43ba-9533-9e34cba60870 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 13:02:24 test-preload-860024 crio[671]: time="2025-02-10 13:02:24.675603024Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a6278ce1-82d8-4986-9895-61363a34013d name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:02:24 test-preload-860024 crio[671]: time="2025-02-10 13:02:24.675657948Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a6278ce1-82d8-4986-9895-61363a34013d name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:02:24 test-preload-860024 crio[671]: time="2025-02-10 13:02:24.675836557Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:88bf4f0c4c9289a58fbaf9907564d5617819d2f3cb29b296a39f260aec5e1412,PodSandboxId:54afdc0bb37a095fc07f26f3e7174c7eedd5d35ea92b80ec8fe083913d555d85,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1739192536650705654,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-xv89c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48ee27af-325d-48fa-96b6-cc5fa9328eba,},Annotations:map[string]string{io.kubernetes.container.hash: fe8e8fc5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff7900dcd281c4e497c70dccb7285de3c3098d82276e2bc594f339618241a000,PodSandboxId:26b4eb4c4c1b74451f128a5841b0ea990f01a2b25d3a14742fb0883f6d72ead2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1739192529465500999,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 494ddd3b-05d4-40ea-b4e3-53a6af6ae94d,},Annotations:map[string]string{io.kubernetes.container.hash: d086e856,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc40c3d75c3c3be8c95751f628b35bc253b7394325a408b3da09a173fde95ef7,PodSandboxId:0571310fcc37a52eac0cfcfba0077eb594aa8dca27e0f4e67e2d079192a65250,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1739192529108838270,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4276s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc
531182-23ca-4c28-88cb-1860b31cb5f1,},Annotations:map[string]string{io.kubernetes.container.hash: 5b0cf8ad,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b592521283887291be364ec8e5078ad42b01431badd4d8857d9f80d84c3c7df6,PodSandboxId:605e7b537ea85a3cc64aac38ae5db5ee0d6926ffba74d0b37695ffebfbb7b0aa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1739192524183232258,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-860024,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 25628de528b4b09f4e190df783a783bb,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f463a652c588a22cfe1ac6c5adfdd9ba41c883b8e873e0a0629291a7cb31273d,PodSandboxId:8d60135cd633de8a8bd5e6b77a10a2c135bf08b9d6647dbbcd67cb9edf2ec880,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1739192524145606982,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-860024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 10940201585bf6176c830f9f48286ead,},Annotations:map[string]string{io.kubernetes.container.hash: 127fe33f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d70940fb2d1d7a417fe468193cdeedcf478321f1cf1456082b5d9c137fa7f48,PodSandboxId:a7e8b6ea040343b3a432d6d0635de03cb992fe7b7d8c4b546f541e253408d4d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1739192524117070222,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-860024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8657
dd8b289e959dff0982d9086eacf1,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19ba80deefcafc3525fc8a991584126668f4ad37a08329d184b5e4fa540cd22b,PodSandboxId:d2f669c7dcb5b20716aec85a4e5acef3d41639fd2c0b826d6a44a8a71aaf6ef0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1739192524070965045,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-860024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c20dcdfd2a4e6d847a84f3ac198d6b4,},Annotation
s:map[string]string{io.kubernetes.container.hash: 79a878d0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a6278ce1-82d8-4986-9895-61363a34013d name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:02:24 test-preload-860024 crio[671]: time="2025-02-10 13:02:24.712261325Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=30b00245-decb-46dc-8a44-b71bd886775e name=/runtime.v1.RuntimeService/Version
	Feb 10 13:02:24 test-preload-860024 crio[671]: time="2025-02-10 13:02:24.712335407Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=30b00245-decb-46dc-8a44-b71bd886775e name=/runtime.v1.RuntimeService/Version
	Feb 10 13:02:24 test-preload-860024 crio[671]: time="2025-02-10 13:02:24.713739741Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fb1895f7-fbc5-4009-a1ee-bbb324a5079b name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 13:02:24 test-preload-860024 crio[671]: time="2025-02-10 13:02:24.714196606Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739192544714172083,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fb1895f7-fbc5-4009-a1ee-bbb324a5079b name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 13:02:24 test-preload-860024 crio[671]: time="2025-02-10 13:02:24.714837749Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=87e294d9-2d37-4d49-8b2f-de966350a260 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:02:24 test-preload-860024 crio[671]: time="2025-02-10 13:02:24.714891562Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=87e294d9-2d37-4d49-8b2f-de966350a260 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:02:24 test-preload-860024 crio[671]: time="2025-02-10 13:02:24.715060811Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:88bf4f0c4c9289a58fbaf9907564d5617819d2f3cb29b296a39f260aec5e1412,PodSandboxId:54afdc0bb37a095fc07f26f3e7174c7eedd5d35ea92b80ec8fe083913d555d85,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1739192536650705654,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-xv89c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48ee27af-325d-48fa-96b6-cc5fa9328eba,},Annotations:map[string]string{io.kubernetes.container.hash: fe8e8fc5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff7900dcd281c4e497c70dccb7285de3c3098d82276e2bc594f339618241a000,PodSandboxId:26b4eb4c4c1b74451f128a5841b0ea990f01a2b25d3a14742fb0883f6d72ead2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1739192529465500999,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 494ddd3b-05d4-40ea-b4e3-53a6af6ae94d,},Annotations:map[string]string{io.kubernetes.container.hash: d086e856,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc40c3d75c3c3be8c95751f628b35bc253b7394325a408b3da09a173fde95ef7,PodSandboxId:0571310fcc37a52eac0cfcfba0077eb594aa8dca27e0f4e67e2d079192a65250,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1739192529108838270,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4276s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc
531182-23ca-4c28-88cb-1860b31cb5f1,},Annotations:map[string]string{io.kubernetes.container.hash: 5b0cf8ad,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b592521283887291be364ec8e5078ad42b01431badd4d8857d9f80d84c3c7df6,PodSandboxId:605e7b537ea85a3cc64aac38ae5db5ee0d6926ffba74d0b37695ffebfbb7b0aa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1739192524183232258,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-860024,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 25628de528b4b09f4e190df783a783bb,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f463a652c588a22cfe1ac6c5adfdd9ba41c883b8e873e0a0629291a7cb31273d,PodSandboxId:8d60135cd633de8a8bd5e6b77a10a2c135bf08b9d6647dbbcd67cb9edf2ec880,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1739192524145606982,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-860024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 10940201585bf6176c830f9f48286ead,},Annotations:map[string]string{io.kubernetes.container.hash: 127fe33f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d70940fb2d1d7a417fe468193cdeedcf478321f1cf1456082b5d9c137fa7f48,PodSandboxId:a7e8b6ea040343b3a432d6d0635de03cb992fe7b7d8c4b546f541e253408d4d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1739192524117070222,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-860024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8657
dd8b289e959dff0982d9086eacf1,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19ba80deefcafc3525fc8a991584126668f4ad37a08329d184b5e4fa540cd22b,PodSandboxId:d2f669c7dcb5b20716aec85a4e5acef3d41639fd2c0b826d6a44a8a71aaf6ef0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1739192524070965045,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-860024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c20dcdfd2a4e6d847a84f3ac198d6b4,},Annotation
s:map[string]string{io.kubernetes.container.hash: 79a878d0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=87e294d9-2d37-4d49-8b2f-de966350a260 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:02:24 test-preload-860024 crio[671]: time="2025-02-10 13:02:24.748135862Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ccce9671-b40a-4c96-98a5-c512f3b07a2c name=/runtime.v1.RuntimeService/Version
	Feb 10 13:02:24 test-preload-860024 crio[671]: time="2025-02-10 13:02:24.748207158Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ccce9671-b40a-4c96-98a5-c512f3b07a2c name=/runtime.v1.RuntimeService/Version
	Feb 10 13:02:24 test-preload-860024 crio[671]: time="2025-02-10 13:02:24.749415517Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d131b7e9-f616-4d04-b610-e73be5d982c8 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 13:02:24 test-preload-860024 crio[671]: time="2025-02-10 13:02:24.749890564Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739192544749868273,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d131b7e9-f616-4d04-b610-e73be5d982c8 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 13:02:24 test-preload-860024 crio[671]: time="2025-02-10 13:02:24.750298403Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=457c29bb-38b9-4862-b944-5c539d5087b5 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:02:24 test-preload-860024 crio[671]: time="2025-02-10 13:02:24.750345846Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=457c29bb-38b9-4862-b944-5c539d5087b5 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:02:24 test-preload-860024 crio[671]: time="2025-02-10 13:02:24.750515977Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:88bf4f0c4c9289a58fbaf9907564d5617819d2f3cb29b296a39f260aec5e1412,PodSandboxId:54afdc0bb37a095fc07f26f3e7174c7eedd5d35ea92b80ec8fe083913d555d85,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1739192536650705654,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-xv89c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48ee27af-325d-48fa-96b6-cc5fa9328eba,},Annotations:map[string]string{io.kubernetes.container.hash: fe8e8fc5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff7900dcd281c4e497c70dccb7285de3c3098d82276e2bc594f339618241a000,PodSandboxId:26b4eb4c4c1b74451f128a5841b0ea990f01a2b25d3a14742fb0883f6d72ead2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1739192529465500999,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 494ddd3b-05d4-40ea-b4e3-53a6af6ae94d,},Annotations:map[string]string{io.kubernetes.container.hash: d086e856,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc40c3d75c3c3be8c95751f628b35bc253b7394325a408b3da09a173fde95ef7,PodSandboxId:0571310fcc37a52eac0cfcfba0077eb594aa8dca27e0f4e67e2d079192a65250,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1739192529108838270,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4276s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc
531182-23ca-4c28-88cb-1860b31cb5f1,},Annotations:map[string]string{io.kubernetes.container.hash: 5b0cf8ad,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b592521283887291be364ec8e5078ad42b01431badd4d8857d9f80d84c3c7df6,PodSandboxId:605e7b537ea85a3cc64aac38ae5db5ee0d6926ffba74d0b37695ffebfbb7b0aa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1739192524183232258,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-860024,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 25628de528b4b09f4e190df783a783bb,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f463a652c588a22cfe1ac6c5adfdd9ba41c883b8e873e0a0629291a7cb31273d,PodSandboxId:8d60135cd633de8a8bd5e6b77a10a2c135bf08b9d6647dbbcd67cb9edf2ec880,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1739192524145606982,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-860024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 10940201585bf6176c830f9f48286ead,},Annotations:map[string]string{io.kubernetes.container.hash: 127fe33f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d70940fb2d1d7a417fe468193cdeedcf478321f1cf1456082b5d9c137fa7f48,PodSandboxId:a7e8b6ea040343b3a432d6d0635de03cb992fe7b7d8c4b546f541e253408d4d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1739192524117070222,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-860024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8657
dd8b289e959dff0982d9086eacf1,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19ba80deefcafc3525fc8a991584126668f4ad37a08329d184b5e4fa540cd22b,PodSandboxId:d2f669c7dcb5b20716aec85a4e5acef3d41639fd2c0b826d6a44a8a71aaf6ef0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1739192524070965045,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-860024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c20dcdfd2a4e6d847a84f3ac198d6b4,},Annotation
s:map[string]string{io.kubernetes.container.hash: 79a878d0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=457c29bb-38b9-4862-b944-5c539d5087b5 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:02:24 test-preload-860024 crio[671]: time="2025-02-10 13:02:24.780659363Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ce683bac-4374-4643-a567-819729b735ff name=/runtime.v1.RuntimeService/Version
	Feb 10 13:02:24 test-preload-860024 crio[671]: time="2025-02-10 13:02:24.780742167Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ce683bac-4374-4643-a567-819729b735ff name=/runtime.v1.RuntimeService/Version
	Feb 10 13:02:24 test-preload-860024 crio[671]: time="2025-02-10 13:02:24.781692932Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8120b02a-8bbc-4c79-a77d-9c5786995054 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 13:02:24 test-preload-860024 crio[671]: time="2025-02-10 13:02:24.782114773Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739192544782092857,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8120b02a-8bbc-4c79-a77d-9c5786995054 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 13:02:24 test-preload-860024 crio[671]: time="2025-02-10 13:02:24.782589763Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7bc044d3-1beb-46c7-9139-7388dc95acf6 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:02:24 test-preload-860024 crio[671]: time="2025-02-10 13:02:24.782704934Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7bc044d3-1beb-46c7-9139-7388dc95acf6 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:02:24 test-preload-860024 crio[671]: time="2025-02-10 13:02:24.782907954Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:88bf4f0c4c9289a58fbaf9907564d5617819d2f3cb29b296a39f260aec5e1412,PodSandboxId:54afdc0bb37a095fc07f26f3e7174c7eedd5d35ea92b80ec8fe083913d555d85,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1739192536650705654,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-xv89c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48ee27af-325d-48fa-96b6-cc5fa9328eba,},Annotations:map[string]string{io.kubernetes.container.hash: fe8e8fc5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff7900dcd281c4e497c70dccb7285de3c3098d82276e2bc594f339618241a000,PodSandboxId:26b4eb4c4c1b74451f128a5841b0ea990f01a2b25d3a14742fb0883f6d72ead2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1739192529465500999,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 494ddd3b-05d4-40ea-b4e3-53a6af6ae94d,},Annotations:map[string]string{io.kubernetes.container.hash: d086e856,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc40c3d75c3c3be8c95751f628b35bc253b7394325a408b3da09a173fde95ef7,PodSandboxId:0571310fcc37a52eac0cfcfba0077eb594aa8dca27e0f4e67e2d079192a65250,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1739192529108838270,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4276s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc
531182-23ca-4c28-88cb-1860b31cb5f1,},Annotations:map[string]string{io.kubernetes.container.hash: 5b0cf8ad,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b592521283887291be364ec8e5078ad42b01431badd4d8857d9f80d84c3c7df6,PodSandboxId:605e7b537ea85a3cc64aac38ae5db5ee0d6926ffba74d0b37695ffebfbb7b0aa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1739192524183232258,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-860024,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 25628de528b4b09f4e190df783a783bb,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f463a652c588a22cfe1ac6c5adfdd9ba41c883b8e873e0a0629291a7cb31273d,PodSandboxId:8d60135cd633de8a8bd5e6b77a10a2c135bf08b9d6647dbbcd67cb9edf2ec880,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1739192524145606982,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-860024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 10940201585bf6176c830f9f48286ead,},Annotations:map[string]string{io.kubernetes.container.hash: 127fe33f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d70940fb2d1d7a417fe468193cdeedcf478321f1cf1456082b5d9c137fa7f48,PodSandboxId:a7e8b6ea040343b3a432d6d0635de03cb992fe7b7d8c4b546f541e253408d4d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1739192524117070222,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-860024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8657
dd8b289e959dff0982d9086eacf1,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19ba80deefcafc3525fc8a991584126668f4ad37a08329d184b5e4fa540cd22b,PodSandboxId:d2f669c7dcb5b20716aec85a4e5acef3d41639fd2c0b826d6a44a8a71aaf6ef0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1739192524070965045,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-860024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c20dcdfd2a4e6d847a84f3ac198d6b4,},Annotation
s:map[string]string{io.kubernetes.container.hash: 79a878d0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7bc044d3-1beb-46c7-9139-7388dc95acf6 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	88bf4f0c4c928       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   8 seconds ago       Running             coredns                   1                   54afdc0bb37a0       coredns-6d4b75cb6d-xv89c
	ff7900dcd281c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 seconds ago      Running             storage-provisioner       1                   26b4eb4c4c1b7       storage-provisioner
	cc40c3d75c3c3       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   15 seconds ago      Running             kube-proxy                1                   0571310fcc37a       kube-proxy-4276s
	b592521283887       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   20 seconds ago      Running             kube-controller-manager   1                   605e7b537ea85       kube-controller-manager-test-preload-860024
	f463a652c588a       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   20 seconds ago      Running             kube-apiserver            1                   8d60135cd633d       kube-apiserver-test-preload-860024
	7d70940fb2d1d       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   20 seconds ago      Running             kube-scheduler            1                   a7e8b6ea04034       kube-scheduler-test-preload-860024
	19ba80deefcaf       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   20 seconds ago      Running             etcd                      1                   d2f669c7dcb5b       etcd-test-preload-860024
	
	
	==> coredns [88bf4f0c4c9289a58fbaf9907564d5617819d2f3cb29b296a39f260aec5e1412] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:48430 - 39349 "HINFO IN 727313190285500201.5094068570668695718. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.022826547s
	
	
	==> describe nodes <==
	Name:               test-preload-860024
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-860024
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ef65fd9d75393231710a2bc61f2cab58e3e6ecb2
	                    minikube.k8s.io/name=test-preload-860024
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_02_10T12_58_55_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Feb 2025 12:58:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-860024
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Feb 2025 13:02:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Feb 2025 13:02:18 +0000   Mon, 10 Feb 2025 12:58:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Feb 2025 13:02:18 +0000   Mon, 10 Feb 2025 12:58:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Feb 2025 13:02:18 +0000   Mon, 10 Feb 2025 12:58:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Feb 2025 13:02:18 +0000   Mon, 10 Feb 2025 13:02:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.223
	  Hostname:    test-preload-860024
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 31e05a724fae4d2a86233e1d41ee1fb6
	  System UUID:                31e05a72-4fae-4d2a-8623-3e1d41ee1fb6
	  Boot ID:                    7cc65a2c-85e2-48d9-bd92-754d8dc28a0c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-xv89c                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     3m16s
	  kube-system                 etcd-test-preload-860024                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m29s
	  kube-system                 kube-apiserver-test-preload-860024             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 kube-controller-manager-test-preload-860024    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m30s
	  kube-system                 kube-proxy-4276s                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m16s
	  kube-system                 kube-scheduler-test-preload-860024             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 15s                    kube-proxy       
	  Normal  Starting                 3m15s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m36s (x5 over 3m37s)  kubelet          Node test-preload-860024 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m36s (x5 over 3m37s)  kubelet          Node test-preload-860024 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m36s (x4 over 3m37s)  kubelet          Node test-preload-860024 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m29s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  3m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m29s                  kubelet          Node test-preload-860024 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m29s                  kubelet          Node test-preload-860024 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m29s                  kubelet          Node test-preload-860024 status is now: NodeHasSufficientPID
	  Normal  NodeReady                3m19s                  kubelet          Node test-preload-860024 status is now: NodeReady
	  Normal  RegisteredNode           3m17s                  node-controller  Node test-preload-860024 event: Registered Node test-preload-860024 in Controller
	  Normal  Starting                 21s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21s (x8 over 21s)      kubelet          Node test-preload-860024 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x8 over 21s)      kubelet          Node test-preload-860024 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x7 over 21s)      kubelet          Node test-preload-860024 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4s                     node-controller  Node test-preload-860024 event: Registered Node test-preload-860024 in Controller
	
	
	==> dmesg <==
	[Feb10 13:01] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052293] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039389] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.859258] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.999972] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.537138] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.685291] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.056988] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056063] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.161840] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.139006] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.266001] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[Feb10 13:02] systemd-fstab-generator[991]: Ignoring "noauto" option for root device
	[  +0.055838] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.780707] systemd-fstab-generator[1118]: Ignoring "noauto" option for root device
	[  +5.326440] kauditd_printk_skb: 105 callbacks suppressed
	[  +7.803967] kauditd_printk_skb: 31 callbacks suppressed
	[  +0.136287] systemd-fstab-generator[1821]: Ignoring "noauto" option for root device
	
	
	==> etcd [19ba80deefcafc3525fc8a991584126668f4ad37a08329d184b5e4fa540cd22b] <==
	{"level":"info","ts":"2025-02-10T13:02:04.518Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"dce4f6de3abdb6bd","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2025-02-10T13:02:04.528Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-02-10T13:02:04.530Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-02-10T13:02:04.530Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"dce4f6de3abdb6bd","initial-advertise-peer-urls":["https://192.168.39.223:2380"],"listen-peer-urls":["https://192.168.39.223:2380"],"advertise-client-urls":["https://192.168.39.223:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.223:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-02-10T13:02:04.530Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-02-10T13:02:04.530Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.223:2380"}
	{"level":"info","ts":"2025-02-10T13:02:04.530Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.223:2380"}
	{"level":"info","ts":"2025-02-10T13:02:04.531Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dce4f6de3abdb6bd switched to configuration voters=(15917118417362859709)"}
	{"level":"info","ts":"2025-02-10T13:02:04.531Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"4eb1782ea0e4b224","local-member-id":"dce4f6de3abdb6bd","added-peer-id":"dce4f6de3abdb6bd","added-peer-peer-urls":["https://192.168.39.223:2380"]}
	{"level":"info","ts":"2025-02-10T13:02:04.531Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"4eb1782ea0e4b224","local-member-id":"dce4f6de3abdb6bd","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-10T13:02:04.531Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-10T13:02:05.889Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dce4f6de3abdb6bd is starting a new election at term 2"}
	{"level":"info","ts":"2025-02-10T13:02:05.889Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dce4f6de3abdb6bd became pre-candidate at term 2"}
	{"level":"info","ts":"2025-02-10T13:02:05.889Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dce4f6de3abdb6bd received MsgPreVoteResp from dce4f6de3abdb6bd at term 2"}
	{"level":"info","ts":"2025-02-10T13:02:05.889Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dce4f6de3abdb6bd became candidate at term 3"}
	{"level":"info","ts":"2025-02-10T13:02:05.889Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dce4f6de3abdb6bd received MsgVoteResp from dce4f6de3abdb6bd at term 3"}
	{"level":"info","ts":"2025-02-10T13:02:05.889Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dce4f6de3abdb6bd became leader at term 3"}
	{"level":"info","ts":"2025-02-10T13:02:05.889Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dce4f6de3abdb6bd elected leader dce4f6de3abdb6bd at term 3"}
	{"level":"info","ts":"2025-02-10T13:02:05.889Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"dce4f6de3abdb6bd","local-member-attributes":"{Name:test-preload-860024 ClientURLs:[https://192.168.39.223:2379]}","request-path":"/0/members/dce4f6de3abdb6bd/attributes","cluster-id":"4eb1782ea0e4b224","publish-timeout":"7s"}
	{"level":"info","ts":"2025-02-10T13:02:05.890Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-10T13:02:05.890Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-10T13:02:05.892Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-02-10T13:02:05.892Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-02-10T13:02:05.892Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.223:2379"}
	{"level":"info","ts":"2025-02-10T13:02:05.893Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 13:02:25 up 0 min,  0 users,  load average: 0.44, 0.15, 0.05
	Linux test-preload-860024 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [f463a652c588a22cfe1ac6c5adfdd9ba41c883b8e873e0a0629291a7cb31273d] <==
	I0210 13:02:08.171317       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0210 13:02:08.171359       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0210 13:02:08.191956       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0210 13:02:08.192005       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0210 13:02:08.197503       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0210 13:02:08.212742       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	E0210 13:02:08.272426       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0210 13:02:08.286880       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0210 13:02:08.292728       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0210 13:02:08.357253       1 cache.go:39] Caches are synced for autoregister controller
	I0210 13:02:08.360741       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0210 13:02:08.360867       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0210 13:02:08.361140       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0210 13:02:08.361357       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0210 13:02:08.380871       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0210 13:02:08.868036       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0210 13:02:09.167004       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0210 13:02:09.600323       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0210 13:02:10.090128       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0210 13:02:10.096517       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0210 13:02:10.124659       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0210 13:02:10.140274       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0210 13:02:10.145972       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0210 13:02:20.704011       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0210 13:02:20.896311       1 controller.go:611] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [b592521283887291be364ec8e5078ad42b01431badd4d8857d9f80d84c3c7df6] <==
	I0210 13:02:20.677700       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0210 13:02:20.680743       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0210 13:02:20.692503       1 shared_informer.go:262] Caches are synced for PV protection
	I0210 13:02:20.694903       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0210 13:02:20.705511       1 shared_informer.go:262] Caches are synced for node
	I0210 13:02:20.705621       1 range_allocator.go:173] Starting range CIDR allocator
	I0210 13:02:20.705630       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0210 13:02:20.705641       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0210 13:02:20.708520       1 shared_informer.go:262] Caches are synced for ephemeral
	I0210 13:02:20.710850       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0210 13:02:20.715274       1 shared_informer.go:262] Caches are synced for attach detach
	I0210 13:02:20.716539       1 shared_informer.go:262] Caches are synced for GC
	I0210 13:02:20.723053       1 shared_informer.go:262] Caches are synced for stateful set
	I0210 13:02:20.732989       1 shared_informer.go:262] Caches are synced for endpoint
	I0210 13:02:20.843792       1 shared_informer.go:262] Caches are synced for job
	I0210 13:02:20.852535       1 shared_informer.go:262] Caches are synced for disruption
	I0210 13:02:20.852650       1 disruption.go:371] Sending events to api server.
	I0210 13:02:20.865197       1 shared_informer.go:262] Caches are synced for resource quota
	I0210 13:02:20.868724       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0210 13:02:20.872020       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0210 13:02:20.885604       1 shared_informer.go:262] Caches are synced for cronjob
	I0210 13:02:20.898963       1 shared_informer.go:262] Caches are synced for resource quota
	I0210 13:02:21.349363       1 shared_informer.go:262] Caches are synced for garbage collector
	I0210 13:02:21.381941       1 shared_informer.go:262] Caches are synced for garbage collector
	I0210 13:02:21.381983       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [cc40c3d75c3c3be8c95751f628b35bc253b7394325a408b3da09a173fde95ef7] <==
	I0210 13:02:09.542225       1 node.go:163] Successfully retrieved node IP: 192.168.39.223
	I0210 13:02:09.542294       1 server_others.go:138] "Detected node IP" address="192.168.39.223"
	I0210 13:02:09.542374       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0210 13:02:09.592446       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0210 13:02:09.592473       1 server_others.go:206] "Using iptables Proxier"
	I0210 13:02:09.593252       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0210 13:02:09.593931       1 server.go:661] "Version info" version="v1.24.4"
	I0210 13:02:09.593955       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 13:02:09.596287       1 config.go:317] "Starting service config controller"
	I0210 13:02:09.596332       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0210 13:02:09.596355       1 config.go:226] "Starting endpoint slice config controller"
	I0210 13:02:09.596371       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0210 13:02:09.597065       1 config.go:444] "Starting node config controller"
	I0210 13:02:09.597130       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0210 13:02:09.696431       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0210 13:02:09.696600       1 shared_informer.go:262] Caches are synced for service config
	I0210 13:02:09.697364       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [7d70940fb2d1d7a417fe468193cdeedcf478321f1cf1456082b5d9c137fa7f48] <==
	I0210 13:02:05.375881       1 serving.go:348] Generated self-signed cert in-memory
	W0210 13:02:08.214971       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0210 13:02:08.215092       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0210 13:02:08.215126       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0210 13:02:08.215198       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0210 13:02:08.264943       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0210 13:02:08.264972       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 13:02:08.276647       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0210 13:02:08.280305       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0210 13:02:08.280338       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0210 13:02:08.280363       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0210 13:02:08.380464       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 10 13:02:08 test-preload-860024 kubelet[1125]: I0210 13:02:08.407099    1125 apiserver.go:52] "Watching apiserver"
	Feb 10 13:02:08 test-preload-860024 kubelet[1125]: I0210 13:02:08.411642    1125 topology_manager.go:200] "Topology Admit Handler"
	Feb 10 13:02:08 test-preload-860024 kubelet[1125]: I0210 13:02:08.411927    1125 topology_manager.go:200] "Topology Admit Handler"
	Feb 10 13:02:08 test-preload-860024 kubelet[1125]: I0210 13:02:08.412065    1125 topology_manager.go:200] "Topology Admit Handler"
	Feb 10 13:02:08 test-preload-860024 kubelet[1125]: E0210 13:02:08.413597    1125 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-xv89c" podUID=48ee27af-325d-48fa-96b6-cc5fa9328eba
	Feb 10 13:02:08 test-preload-860024 kubelet[1125]: E0210 13:02:08.457178    1125 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Feb 10 13:02:08 test-preload-860024 kubelet[1125]: I0210 13:02:08.476253    1125 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2n5v\" (UniqueName: \"kubernetes.io/projected/48ee27af-325d-48fa-96b6-cc5fa9328eba-kube-api-access-j2n5v\") pod \"coredns-6d4b75cb6d-xv89c\" (UID: \"48ee27af-325d-48fa-96b6-cc5fa9328eba\") " pod="kube-system/coredns-6d4b75cb6d-xv89c"
	Feb 10 13:02:08 test-preload-860024 kubelet[1125]: I0210 13:02:08.476333    1125 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/48ee27af-325d-48fa-96b6-cc5fa9328eba-config-volume\") pod \"coredns-6d4b75cb6d-xv89c\" (UID: \"48ee27af-325d-48fa-96b6-cc5fa9328eba\") " pod="kube-system/coredns-6d4b75cb6d-xv89c"
	Feb 10 13:02:08 test-preload-860024 kubelet[1125]: I0210 13:02:08.476358    1125 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fc531182-23ca-4c28-88cb-1860b31cb5f1-kube-proxy\") pod \"kube-proxy-4276s\" (UID: \"fc531182-23ca-4c28-88cb-1860b31cb5f1\") " pod="kube-system/kube-proxy-4276s"
	Feb 10 13:02:08 test-preload-860024 kubelet[1125]: I0210 13:02:08.476378    1125 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc531182-23ca-4c28-88cb-1860b31cb5f1-xtables-lock\") pod \"kube-proxy-4276s\" (UID: \"fc531182-23ca-4c28-88cb-1860b31cb5f1\") " pod="kube-system/kube-proxy-4276s"
	Feb 10 13:02:08 test-preload-860024 kubelet[1125]: I0210 13:02:08.476397    1125 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6mfv\" (UniqueName: \"kubernetes.io/projected/fc531182-23ca-4c28-88cb-1860b31cb5f1-kube-api-access-v6mfv\") pod \"kube-proxy-4276s\" (UID: \"fc531182-23ca-4c28-88cb-1860b31cb5f1\") " pod="kube-system/kube-proxy-4276s"
	Feb 10 13:02:08 test-preload-860024 kubelet[1125]: I0210 13:02:08.476417    1125 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/494ddd3b-05d4-40ea-b4e3-53a6af6ae94d-tmp\") pod \"storage-provisioner\" (UID: \"494ddd3b-05d4-40ea-b4e3-53a6af6ae94d\") " pod="kube-system/storage-provisioner"
	Feb 10 13:02:08 test-preload-860024 kubelet[1125]: I0210 13:02:08.476439    1125 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc531182-23ca-4c28-88cb-1860b31cb5f1-lib-modules\") pod \"kube-proxy-4276s\" (UID: \"fc531182-23ca-4c28-88cb-1860b31cb5f1\") " pod="kube-system/kube-proxy-4276s"
	Feb 10 13:02:08 test-preload-860024 kubelet[1125]: I0210 13:02:08.476460    1125 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k9zp\" (UniqueName: \"kubernetes.io/projected/494ddd3b-05d4-40ea-b4e3-53a6af6ae94d-kube-api-access-4k9zp\") pod \"storage-provisioner\" (UID: \"494ddd3b-05d4-40ea-b4e3-53a6af6ae94d\") " pod="kube-system/storage-provisioner"
	Feb 10 13:02:08 test-preload-860024 kubelet[1125]: I0210 13:02:08.476479    1125 reconciler.go:159] "Reconciler: start to sync state"
	Feb 10 13:02:08 test-preload-860024 kubelet[1125]: E0210 13:02:08.579040    1125 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Feb 10 13:02:08 test-preload-860024 kubelet[1125]: E0210 13:02:08.579204    1125 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/48ee27af-325d-48fa-96b6-cc5fa9328eba-config-volume podName:48ee27af-325d-48fa-96b6-cc5fa9328eba nodeName:}" failed. No retries permitted until 2025-02-10 13:02:09.079170604 +0000 UTC m=+5.799740710 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/48ee27af-325d-48fa-96b6-cc5fa9328eba-config-volume") pod "coredns-6d4b75cb6d-xv89c" (UID: "48ee27af-325d-48fa-96b6-cc5fa9328eba") : object "kube-system"/"coredns" not registered
	Feb 10 13:02:09 test-preload-860024 kubelet[1125]: E0210 13:02:09.081160    1125 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Feb 10 13:02:09 test-preload-860024 kubelet[1125]: E0210 13:02:09.081249    1125 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/48ee27af-325d-48fa-96b6-cc5fa9328eba-config-volume podName:48ee27af-325d-48fa-96b6-cc5fa9328eba nodeName:}" failed. No retries permitted until 2025-02-10 13:02:10.081227786 +0000 UTC m=+6.801797888 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/48ee27af-325d-48fa-96b6-cc5fa9328eba-config-volume") pod "coredns-6d4b75cb6d-xv89c" (UID: "48ee27af-325d-48fa-96b6-cc5fa9328eba") : object "kube-system"/"coredns" not registered
	Feb 10 13:02:10 test-preload-860024 kubelet[1125]: E0210 13:02:10.087744    1125 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Feb 10 13:02:10 test-preload-860024 kubelet[1125]: E0210 13:02:10.087806    1125 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/48ee27af-325d-48fa-96b6-cc5fa9328eba-config-volume podName:48ee27af-325d-48fa-96b6-cc5fa9328eba nodeName:}" failed. No retries permitted until 2025-02-10 13:02:12.087791807 +0000 UTC m=+8.808361909 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/48ee27af-325d-48fa-96b6-cc5fa9328eba-config-volume") pod "coredns-6d4b75cb6d-xv89c" (UID: "48ee27af-325d-48fa-96b6-cc5fa9328eba") : object "kube-system"/"coredns" not registered
	Feb 10 13:02:10 test-preload-860024 kubelet[1125]: E0210 13:02:10.506815    1125 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-xv89c" podUID=48ee27af-325d-48fa-96b6-cc5fa9328eba
	Feb 10 13:02:12 test-preload-860024 kubelet[1125]: E0210 13:02:12.103259    1125 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Feb 10 13:02:12 test-preload-860024 kubelet[1125]: E0210 13:02:12.103354    1125 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/48ee27af-325d-48fa-96b6-cc5fa9328eba-config-volume podName:48ee27af-325d-48fa-96b6-cc5fa9328eba nodeName:}" failed. No retries permitted until 2025-02-10 13:02:16.10333931 +0000 UTC m=+12.823909399 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/48ee27af-325d-48fa-96b6-cc5fa9328eba-config-volume") pod "coredns-6d4b75cb6d-xv89c" (UID: "48ee27af-325d-48fa-96b6-cc5fa9328eba") : object "kube-system"/"coredns" not registered
	Feb 10 13:02:12 test-preload-860024 kubelet[1125]: E0210 13:02:12.506690    1125 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-xv89c" podUID=48ee27af-325d-48fa-96b6-cc5fa9328eba
	
	
	==> storage-provisioner [ff7900dcd281c4e497c70dccb7285de3c3098d82276e2bc594f339618241a000] <==
	I0210 13:02:09.570254       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-860024 -n test-preload-860024
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-860024 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-860024" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-860024
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-860024: (1.19814976s)
--- FAIL: TestPreload (281.12s)

                                                
                                    
TestKubernetesUpgrade (729.16s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-284631 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-284631 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m53.451961248s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-284631] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20383
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20383-625153/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20383-625153/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-284631" primary control-plane node in "kubernetes-upgrade-284631" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0210 13:07:40.397701  668142 out.go:345] Setting OutFile to fd 1 ...
	I0210 13:07:40.397826  668142 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 13:07:40.397835  668142 out.go:358] Setting ErrFile to fd 2...
	I0210 13:07:40.397839  668142 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 13:07:40.398003  668142 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20383-625153/.minikube/bin
	I0210 13:07:40.398607  668142 out.go:352] Setting JSON to false
	I0210 13:07:40.399566  668142 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":17410,"bootTime":1739175450,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0210 13:07:40.399678  668142 start.go:139] virtualization: kvm guest
	I0210 13:07:40.401945  668142 out.go:177] * [kubernetes-upgrade-284631] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0210 13:07:40.403736  668142 out.go:177]   - MINIKUBE_LOCATION=20383
	I0210 13:07:40.403745  668142 notify.go:220] Checking for updates...
	I0210 13:07:40.406112  668142 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 13:07:40.407436  668142 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20383-625153/kubeconfig
	I0210 13:07:40.408811  668142 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20383-625153/.minikube
	I0210 13:07:40.410075  668142 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0210 13:07:40.411247  668142 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0210 13:07:40.412922  668142 config.go:182] Loaded profile config "NoKubernetes-125233": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0210 13:07:40.413019  668142 config.go:182] Loaded profile config "cert-expiration-241180": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 13:07:40.413099  668142 config.go:182] Loaded profile config "running-upgrade-123942": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0210 13:07:40.413229  668142 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 13:07:40.455260  668142 out.go:177] * Using the kvm2 driver based on user configuration
	I0210 13:07:40.456505  668142 start.go:297] selected driver: kvm2
	I0210 13:07:40.456527  668142 start.go:901] validating driver "kvm2" against <nil>
	I0210 13:07:40.456541  668142 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0210 13:07:40.457503  668142 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 13:07:40.457597  668142 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20383-625153/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0210 13:07:40.473643  668142 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0210 13:07:40.473702  668142 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0210 13:07:40.473938  668142 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0210 13:07:40.473966  668142 cni.go:84] Creating CNI manager for ""
	I0210 13:07:40.474009  668142 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 13:07:40.474021  668142 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0210 13:07:40.474066  668142 start.go:340] cluster config:
	{Name:kubernetes-upgrade-284631 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-284631 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 13:07:40.474171  668142 iso.go:125] acquiring lock: {Name:mk013d189757e85c0699014f5ef29205d9d4927f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 13:07:40.475847  668142 out.go:177] * Starting "kubernetes-upgrade-284631" primary control-plane node in "kubernetes-upgrade-284631" cluster
	I0210 13:07:40.477016  668142 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0210 13:07:40.477064  668142 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20383-625153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0210 13:07:40.477091  668142 cache.go:56] Caching tarball of preloaded images
	I0210 13:07:40.477210  668142 preload.go:172] Found /home/jenkins/minikube-integration/20383-625153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0210 13:07:40.477222  668142 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0210 13:07:40.477312  668142 profile.go:143] Saving config to /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/kubernetes-upgrade-284631/config.json ...
	I0210 13:07:40.477331  668142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/kubernetes-upgrade-284631/config.json: {Name:mk83991b76f2c0c2195d08b8eb3b991a065a6029 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:07:40.477479  668142 start.go:360] acquireMachinesLock for kubernetes-upgrade-284631: {Name:mk28e87da66de739a4c7c70d1fb5afc4ce31a4d0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0210 13:08:02.073896  668142 start.go:364] duration metric: took 21.596384973s to acquireMachinesLock for "kubernetes-upgrade-284631"
	I0210 13:08:02.074012  668142 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-284631 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-284631 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0210 13:08:02.074166  668142 start.go:125] createHost starting for "" (driver="kvm2")
	I0210 13:08:02.076380  668142 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0210 13:08:02.076633  668142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:08:02.076714  668142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:08:02.097316  668142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35929
	I0210 13:08:02.097923  668142 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:08:02.098641  668142 main.go:141] libmachine: Using API Version  1
	I0210 13:08:02.098664  668142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:08:02.099072  668142 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:08:02.099274  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetMachineName
	I0210 13:08:02.099419  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .DriverName
	I0210 13:08:02.099583  668142 start.go:159] libmachine.API.Create for "kubernetes-upgrade-284631" (driver="kvm2")
	I0210 13:08:02.099618  668142 client.go:168] LocalClient.Create starting
	I0210 13:08:02.099655  668142 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca.pem
	I0210 13:08:02.099698  668142 main.go:141] libmachine: Decoding PEM data...
	I0210 13:08:02.099721  668142 main.go:141] libmachine: Parsing certificate...
	I0210 13:08:02.099804  668142 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20383-625153/.minikube/certs/cert.pem
	I0210 13:08:02.099831  668142 main.go:141] libmachine: Decoding PEM data...
	I0210 13:08:02.099846  668142 main.go:141] libmachine: Parsing certificate...
	I0210 13:08:02.099872  668142 main.go:141] libmachine: Running pre-create checks...
	I0210 13:08:02.099884  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .PreCreateCheck
	I0210 13:08:02.100226  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetConfigRaw
	I0210 13:08:02.100684  668142 main.go:141] libmachine: Creating machine...
	I0210 13:08:02.100702  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .Create
	I0210 13:08:02.100881  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) creating KVM machine...
	I0210 13:08:02.100903  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) creating network...
	I0210 13:08:02.102264  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | found existing default KVM network
	I0210 13:08:02.103536  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | I0210 13:08:02.103350  668361 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:92:fd:5a} reservation:<nil>}
	I0210 13:08:02.104777  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | I0210 13:08:02.104669  668361 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000209f60}
	I0210 13:08:02.104802  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | created network xml: 
	I0210 13:08:02.104814  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | <network>
	I0210 13:08:02.104827  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG |   <name>mk-kubernetes-upgrade-284631</name>
	I0210 13:08:02.104838  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG |   <dns enable='no'/>
	I0210 13:08:02.104845  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG |   
	I0210 13:08:02.104855  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0210 13:08:02.104867  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG |     <dhcp>
	I0210 13:08:02.104886  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0210 13:08:02.104893  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG |     </dhcp>
	I0210 13:08:02.104901  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG |   </ip>
	I0210 13:08:02.104908  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG |   
	I0210 13:08:02.104916  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | </network>
	I0210 13:08:02.104922  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | 
	I0210 13:08:02.110639  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | trying to create private KVM network mk-kubernetes-upgrade-284631 192.168.50.0/24...
	I0210 13:08:02.192932  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | private KVM network mk-kubernetes-upgrade-284631 192.168.50.0/24 created
	I0210 13:08:02.192959  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) setting up store path in /home/jenkins/minikube-integration/20383-625153/.minikube/machines/kubernetes-upgrade-284631 ...
	I0210 13:08:02.192974  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | I0210 13:08:02.192919  668361 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20383-625153/.minikube
	I0210 13:08:02.192987  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) building disk image from file:///home/jenkins/minikube-integration/20383-625153/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0210 13:08:02.193126  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Downloading /home/jenkins/minikube-integration/20383-625153/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20383-625153/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0210 13:08:02.513494  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | I0210 13:08:02.513368  668361 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20383-625153/.minikube/machines/kubernetes-upgrade-284631/id_rsa...
	I0210 13:08:02.685403  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | I0210 13:08:02.685208  668361 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20383-625153/.minikube/machines/kubernetes-upgrade-284631/kubernetes-upgrade-284631.rawdisk...
	I0210 13:08:02.685441  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | Writing magic tar header
	I0210 13:08:02.685462  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | Writing SSH key tar header
	I0210 13:08:02.685476  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | I0210 13:08:02.685331  668361 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20383-625153/.minikube/machines/kubernetes-upgrade-284631 ...
	I0210 13:08:02.685500  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20383-625153/.minikube/machines/kubernetes-upgrade-284631
	I0210 13:08:02.685517  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) setting executable bit set on /home/jenkins/minikube-integration/20383-625153/.minikube/machines/kubernetes-upgrade-284631 (perms=drwx------)
	I0210 13:08:02.685531  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20383-625153/.minikube/machines
	I0210 13:08:02.685548  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20383-625153/.minikube
	I0210 13:08:02.685570  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20383-625153
	I0210 13:08:02.685585  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) setting executable bit set on /home/jenkins/minikube-integration/20383-625153/.minikube/machines (perms=drwxr-xr-x)
	I0210 13:08:02.685600  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) setting executable bit set on /home/jenkins/minikube-integration/20383-625153/.minikube (perms=drwxr-xr-x)
	I0210 13:08:02.685610  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) setting executable bit set on /home/jenkins/minikube-integration/20383-625153 (perms=drwxrwxr-x)
	I0210 13:08:02.685625  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0210 13:08:02.685634  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0210 13:08:02.685644  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0210 13:08:02.685659  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | checking permissions on dir: /home/jenkins
	I0210 13:08:02.685669  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | checking permissions on dir: /home
	I0210 13:08:02.685680  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | skipping /home - not owner
	I0210 13:08:02.685692  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) creating domain...
	I0210 13:08:02.686681  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) define libvirt domain using xml: 
	I0210 13:08:02.686705  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) <domain type='kvm'>
	I0210 13:08:02.686716  668142 main.go:141] libmachine: (kubernetes-upgrade-284631)   <name>kubernetes-upgrade-284631</name>
	I0210 13:08:02.686724  668142 main.go:141] libmachine: (kubernetes-upgrade-284631)   <memory unit='MiB'>2200</memory>
	I0210 13:08:02.686733  668142 main.go:141] libmachine: (kubernetes-upgrade-284631)   <vcpu>2</vcpu>
	I0210 13:08:02.686743  668142 main.go:141] libmachine: (kubernetes-upgrade-284631)   <features>
	I0210 13:08:02.686752  668142 main.go:141] libmachine: (kubernetes-upgrade-284631)     <acpi/>
	I0210 13:08:02.686771  668142 main.go:141] libmachine: (kubernetes-upgrade-284631)     <apic/>
	I0210 13:08:02.686782  668142 main.go:141] libmachine: (kubernetes-upgrade-284631)     <pae/>
	I0210 13:08:02.686792  668142 main.go:141] libmachine: (kubernetes-upgrade-284631)     
	I0210 13:08:02.686834  668142 main.go:141] libmachine: (kubernetes-upgrade-284631)   </features>
	I0210 13:08:02.686880  668142 main.go:141] libmachine: (kubernetes-upgrade-284631)   <cpu mode='host-passthrough'>
	I0210 13:08:02.686895  668142 main.go:141] libmachine: (kubernetes-upgrade-284631)   
	I0210 13:08:02.686904  668142 main.go:141] libmachine: (kubernetes-upgrade-284631)   </cpu>
	I0210 13:08:02.686913  668142 main.go:141] libmachine: (kubernetes-upgrade-284631)   <os>
	I0210 13:08:02.686923  668142 main.go:141] libmachine: (kubernetes-upgrade-284631)     <type>hvm</type>
	I0210 13:08:02.686947  668142 main.go:141] libmachine: (kubernetes-upgrade-284631)     <boot dev='cdrom'/>
	I0210 13:08:02.686970  668142 main.go:141] libmachine: (kubernetes-upgrade-284631)     <boot dev='hd'/>
	I0210 13:08:02.686981  668142 main.go:141] libmachine: (kubernetes-upgrade-284631)     <bootmenu enable='no'/>
	I0210 13:08:02.686989  668142 main.go:141] libmachine: (kubernetes-upgrade-284631)   </os>
	I0210 13:08:02.686995  668142 main.go:141] libmachine: (kubernetes-upgrade-284631)   <devices>
	I0210 13:08:02.687003  668142 main.go:141] libmachine: (kubernetes-upgrade-284631)     <disk type='file' device='cdrom'>
	I0210 13:08:02.687013  668142 main.go:141] libmachine: (kubernetes-upgrade-284631)       <source file='/home/jenkins/minikube-integration/20383-625153/.minikube/machines/kubernetes-upgrade-284631/boot2docker.iso'/>
	I0210 13:08:02.687021  668142 main.go:141] libmachine: (kubernetes-upgrade-284631)       <target dev='hdc' bus='scsi'/>
	I0210 13:08:02.687027  668142 main.go:141] libmachine: (kubernetes-upgrade-284631)       <readonly/>
	I0210 13:08:02.687042  668142 main.go:141] libmachine: (kubernetes-upgrade-284631)     </disk>
	I0210 13:08:02.687050  668142 main.go:141] libmachine: (kubernetes-upgrade-284631)     <disk type='file' device='disk'>
	I0210 13:08:02.687059  668142 main.go:141] libmachine: (kubernetes-upgrade-284631)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0210 13:08:02.687122  668142 main.go:141] libmachine: (kubernetes-upgrade-284631)       <source file='/home/jenkins/minikube-integration/20383-625153/.minikube/machines/kubernetes-upgrade-284631/kubernetes-upgrade-284631.rawdisk'/>
	I0210 13:08:02.687144  668142 main.go:141] libmachine: (kubernetes-upgrade-284631)       <target dev='hda' bus='virtio'/>
	I0210 13:08:02.687155  668142 main.go:141] libmachine: (kubernetes-upgrade-284631)     </disk>
	I0210 13:08:02.687166  668142 main.go:141] libmachine: (kubernetes-upgrade-284631)     <interface type='network'>
	I0210 13:08:02.687177  668142 main.go:141] libmachine: (kubernetes-upgrade-284631)       <source network='mk-kubernetes-upgrade-284631'/>
	I0210 13:08:02.687196  668142 main.go:141] libmachine: (kubernetes-upgrade-284631)       <model type='virtio'/>
	I0210 13:08:02.687215  668142 main.go:141] libmachine: (kubernetes-upgrade-284631)     </interface>
	I0210 13:08:02.687225  668142 main.go:141] libmachine: (kubernetes-upgrade-284631)     <interface type='network'>
	I0210 13:08:02.687234  668142 main.go:141] libmachine: (kubernetes-upgrade-284631)       <source network='default'/>
	I0210 13:08:02.687248  668142 main.go:141] libmachine: (kubernetes-upgrade-284631)       <model type='virtio'/>
	I0210 13:08:02.687256  668142 main.go:141] libmachine: (kubernetes-upgrade-284631)     </interface>
	I0210 13:08:02.687267  668142 main.go:141] libmachine: (kubernetes-upgrade-284631)     <serial type='pty'>
	I0210 13:08:02.687276  668142 main.go:141] libmachine: (kubernetes-upgrade-284631)       <target port='0'/>
	I0210 13:08:02.687285  668142 main.go:141] libmachine: (kubernetes-upgrade-284631)     </serial>
	I0210 13:08:02.687294  668142 main.go:141] libmachine: (kubernetes-upgrade-284631)     <console type='pty'>
	I0210 13:08:02.687305  668142 main.go:141] libmachine: (kubernetes-upgrade-284631)       <target type='serial' port='0'/>
	I0210 13:08:02.687321  668142 main.go:141] libmachine: (kubernetes-upgrade-284631)     </console>
	I0210 13:08:02.687335  668142 main.go:141] libmachine: (kubernetes-upgrade-284631)     <rng model='virtio'>
	I0210 13:08:02.687348  668142 main.go:141] libmachine: (kubernetes-upgrade-284631)       <backend model='random'>/dev/random</backend>
	I0210 13:08:02.687355  668142 main.go:141] libmachine: (kubernetes-upgrade-284631)     </rng>
	I0210 13:08:02.687375  668142 main.go:141] libmachine: (kubernetes-upgrade-284631)     
	I0210 13:08:02.687392  668142 main.go:141] libmachine: (kubernetes-upgrade-284631)     
	I0210 13:08:02.687403  668142 main.go:141] libmachine: (kubernetes-upgrade-284631)   </devices>
	I0210 13:08:02.687413  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) </domain>
	I0210 13:08:02.687423  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) 
	I0210 13:08:02.691562  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | domain kubernetes-upgrade-284631 has defined MAC address 52:54:00:e1:36:27 in network default
	I0210 13:08:02.692215  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) starting domain...
	I0210 13:08:02.692243  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | domain kubernetes-upgrade-284631 has defined MAC address 52:54:00:c8:50:79 in network mk-kubernetes-upgrade-284631
	I0210 13:08:02.692253  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) ensuring networks are active...
	I0210 13:08:02.693028  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Ensuring network default is active
	I0210 13:08:02.693439  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Ensuring network mk-kubernetes-upgrade-284631 is active
	I0210 13:08:02.693982  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) getting domain XML...
	I0210 13:08:02.694804  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) creating domain...
	I0210 13:08:04.118489  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) waiting for IP...
	I0210 13:08:04.119333  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | domain kubernetes-upgrade-284631 has defined MAC address 52:54:00:c8:50:79 in network mk-kubernetes-upgrade-284631
	I0210 13:08:04.119761  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | unable to find current IP address of domain kubernetes-upgrade-284631 in network mk-kubernetes-upgrade-284631
	I0210 13:08:04.119834  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | I0210 13:08:04.119757  668361 retry.go:31] will retry after 230.253038ms: waiting for domain to come up
	I0210 13:08:04.353596  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | domain kubernetes-upgrade-284631 has defined MAC address 52:54:00:c8:50:79 in network mk-kubernetes-upgrade-284631
	I0210 13:08:04.354277  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | unable to find current IP address of domain kubernetes-upgrade-284631 in network mk-kubernetes-upgrade-284631
	I0210 13:08:04.354311  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | I0210 13:08:04.354205  668361 retry.go:31] will retry after 332.360296ms: waiting for domain to come up
	I0210 13:08:05.050529  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | domain kubernetes-upgrade-284631 has defined MAC address 52:54:00:c8:50:79 in network mk-kubernetes-upgrade-284631
	I0210 13:08:05.051036  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | unable to find current IP address of domain kubernetes-upgrade-284631 in network mk-kubernetes-upgrade-284631
	I0210 13:08:05.051082  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | I0210 13:08:05.051044  668361 retry.go:31] will retry after 364.296181ms: waiting for domain to come up
	I0210 13:08:05.416739  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | domain kubernetes-upgrade-284631 has defined MAC address 52:54:00:c8:50:79 in network mk-kubernetes-upgrade-284631
	I0210 13:08:05.417495  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | unable to find current IP address of domain kubernetes-upgrade-284631 in network mk-kubernetes-upgrade-284631
	I0210 13:08:05.417517  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | I0210 13:08:05.417456  668361 retry.go:31] will retry after 424.067148ms: waiting for domain to come up
	I0210 13:08:05.842946  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | domain kubernetes-upgrade-284631 has defined MAC address 52:54:00:c8:50:79 in network mk-kubernetes-upgrade-284631
	I0210 13:08:05.843513  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | unable to find current IP address of domain kubernetes-upgrade-284631 in network mk-kubernetes-upgrade-284631
	I0210 13:08:05.843543  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | I0210 13:08:05.843489  668361 retry.go:31] will retry after 527.148805ms: waiting for domain to come up
	I0210 13:08:06.372095  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | domain kubernetes-upgrade-284631 has defined MAC address 52:54:00:c8:50:79 in network mk-kubernetes-upgrade-284631
	I0210 13:08:06.372833  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | unable to find current IP address of domain kubernetes-upgrade-284631 in network mk-kubernetes-upgrade-284631
	I0210 13:08:06.372882  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | I0210 13:08:06.372787  668361 retry.go:31] will retry after 766.851132ms: waiting for domain to come up
	I0210 13:08:07.141451  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | domain kubernetes-upgrade-284631 has defined MAC address 52:54:00:c8:50:79 in network mk-kubernetes-upgrade-284631
	I0210 13:08:07.141982  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | unable to find current IP address of domain kubernetes-upgrade-284631 in network mk-kubernetes-upgrade-284631
	I0210 13:08:07.142063  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | I0210 13:08:07.141960  668361 retry.go:31] will retry after 848.335462ms: waiting for domain to come up
	I0210 13:08:07.992199  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | domain kubernetes-upgrade-284631 has defined MAC address 52:54:00:c8:50:79 in network mk-kubernetes-upgrade-284631
	I0210 13:08:07.992740  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | unable to find current IP address of domain kubernetes-upgrade-284631 in network mk-kubernetes-upgrade-284631
	I0210 13:08:07.992770  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | I0210 13:08:07.992698  668361 retry.go:31] will retry after 1.259565841s: waiting for domain to come up
	I0210 13:08:09.254070  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | domain kubernetes-upgrade-284631 has defined MAC address 52:54:00:c8:50:79 in network mk-kubernetes-upgrade-284631
	I0210 13:08:09.254531  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | unable to find current IP address of domain kubernetes-upgrade-284631 in network mk-kubernetes-upgrade-284631
	I0210 13:08:09.254577  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | I0210 13:08:09.254519  668361 retry.go:31] will retry after 1.642876384s: waiting for domain to come up
	I0210 13:08:10.899366  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | domain kubernetes-upgrade-284631 has defined MAC address 52:54:00:c8:50:79 in network mk-kubernetes-upgrade-284631
	I0210 13:08:10.899825  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | unable to find current IP address of domain kubernetes-upgrade-284631 in network mk-kubernetes-upgrade-284631
	I0210 13:08:10.899854  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | I0210 13:08:10.899779  668361 retry.go:31] will retry after 1.945103516s: waiting for domain to come up
	I0210 13:08:12.846178  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | domain kubernetes-upgrade-284631 has defined MAC address 52:54:00:c8:50:79 in network mk-kubernetes-upgrade-284631
	I0210 13:08:12.846627  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | unable to find current IP address of domain kubernetes-upgrade-284631 in network mk-kubernetes-upgrade-284631
	I0210 13:08:12.846654  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | I0210 13:08:12.846584  668361 retry.go:31] will retry after 2.431635412s: waiting for domain to come up
	I0210 13:08:15.281153  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | domain kubernetes-upgrade-284631 has defined MAC address 52:54:00:c8:50:79 in network mk-kubernetes-upgrade-284631
	I0210 13:08:15.281568  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | unable to find current IP address of domain kubernetes-upgrade-284631 in network mk-kubernetes-upgrade-284631
	I0210 13:08:15.281597  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | I0210 13:08:15.281549  668361 retry.go:31] will retry after 3.197100311s: waiting for domain to come up
	I0210 13:08:18.481071  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | domain kubernetes-upgrade-284631 has defined MAC address 52:54:00:c8:50:79 in network mk-kubernetes-upgrade-284631
	I0210 13:08:18.481525  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | unable to find current IP address of domain kubernetes-upgrade-284631 in network mk-kubernetes-upgrade-284631
	I0210 13:08:18.481556  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | I0210 13:08:18.481483  668361 retry.go:31] will retry after 3.502120187s: waiting for domain to come up
	I0210 13:08:21.986200  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | domain kubernetes-upgrade-284631 has defined MAC address 52:54:00:c8:50:79 in network mk-kubernetes-upgrade-284631
	I0210 13:08:21.986638  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | unable to find current IP address of domain kubernetes-upgrade-284631 in network mk-kubernetes-upgrade-284631
	I0210 13:08:21.986679  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | I0210 13:08:21.986590  668361 retry.go:31] will retry after 5.05067634s: waiting for domain to come up
	I0210 13:08:27.042945  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | domain kubernetes-upgrade-284631 has defined MAC address 52:54:00:c8:50:79 in network mk-kubernetes-upgrade-284631
	I0210 13:08:27.043328  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) found domain IP: 192.168.50.25
	I0210 13:08:27.043353  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) reserving static IP address...
	I0210 13:08:27.043368  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | domain kubernetes-upgrade-284631 has current primary IP address 192.168.50.25 and MAC address 52:54:00:c8:50:79 in network mk-kubernetes-upgrade-284631
	I0210 13:08:27.043743  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-284631", mac: "52:54:00:c8:50:79", ip: "192.168.50.25"} in network mk-kubernetes-upgrade-284631
	I0210 13:08:27.121344  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | Getting to WaitForSSH function...
	I0210 13:08:27.121386  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) reserved static IP address 192.168.50.25 for domain kubernetes-upgrade-284631
	I0210 13:08:27.121400  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) waiting for SSH...
	I0210 13:08:27.124177  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | domain kubernetes-upgrade-284631 has defined MAC address 52:54:00:c8:50:79 in network mk-kubernetes-upgrade-284631
	I0210 13:08:27.124655  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:50:79", ip: ""} in network mk-kubernetes-upgrade-284631: {Iface:virbr3 ExpiryTime:2025-02-10 14:08:17 +0000 UTC Type:0 Mac:52:54:00:c8:50:79 Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c8:50:79}
	I0210 13:08:27.124692  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | domain kubernetes-upgrade-284631 has defined IP address 192.168.50.25 and MAC address 52:54:00:c8:50:79 in network mk-kubernetes-upgrade-284631
	I0210 13:08:27.124874  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | Using SSH client type: external
	I0210 13:08:27.124910  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | Using SSH private key: /home/jenkins/minikube-integration/20383-625153/.minikube/machines/kubernetes-upgrade-284631/id_rsa (-rw-------)
	I0210 13:08:27.124946  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.25 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20383-625153/.minikube/machines/kubernetes-upgrade-284631/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0210 13:08:27.124968  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | About to run SSH command:
	I0210 13:08:27.124979  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | exit 0
	I0210 13:08:27.248998  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | SSH cmd err, output: <nil>: 
	I0210 13:08:27.249262  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) KVM machine creation complete
	I0210 13:08:27.249632  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetConfigRaw
	I0210 13:08:27.250250  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .DriverName
	I0210 13:08:27.250413  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .DriverName
	I0210 13:08:27.250571  668142 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0210 13:08:27.250585  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetState
	I0210 13:08:27.252002  668142 main.go:141] libmachine: Detecting operating system of created instance...
	I0210 13:08:27.252020  668142 main.go:141] libmachine: Waiting for SSH to be available...
	I0210 13:08:27.252028  668142 main.go:141] libmachine: Getting to WaitForSSH function...
	I0210 13:08:27.252036  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetSSHHostname
	I0210 13:08:27.254397  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | domain kubernetes-upgrade-284631 has defined MAC address 52:54:00:c8:50:79 in network mk-kubernetes-upgrade-284631
	I0210 13:08:27.254781  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:50:79", ip: ""} in network mk-kubernetes-upgrade-284631: {Iface:virbr3 ExpiryTime:2025-02-10 14:08:17 +0000 UTC Type:0 Mac:52:54:00:c8:50:79 Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:kubernetes-upgrade-284631 Clientid:01:52:54:00:c8:50:79}
	I0210 13:08:27.254821  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | domain kubernetes-upgrade-284631 has defined IP address 192.168.50.25 and MAC address 52:54:00:c8:50:79 in network mk-kubernetes-upgrade-284631
	I0210 13:08:27.254982  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetSSHPort
	I0210 13:08:27.255157  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetSSHKeyPath
	I0210 13:08:27.255313  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetSSHKeyPath
	I0210 13:08:27.255434  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetSSHUsername
	I0210 13:08:27.255599  668142 main.go:141] libmachine: Using SSH client type: native
	I0210 13:08:27.255820  668142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.50.25 22 <nil> <nil>}
	I0210 13:08:27.255834  668142 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0210 13:08:27.351977  668142 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0210 13:08:27.352008  668142 main.go:141] libmachine: Detecting the provisioner...
	I0210 13:08:27.352019  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetSSHHostname
	I0210 13:08:27.354908  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | domain kubernetes-upgrade-284631 has defined MAC address 52:54:00:c8:50:79 in network mk-kubernetes-upgrade-284631
	I0210 13:08:27.355342  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:50:79", ip: ""} in network mk-kubernetes-upgrade-284631: {Iface:virbr3 ExpiryTime:2025-02-10 14:08:17 +0000 UTC Type:0 Mac:52:54:00:c8:50:79 Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:kubernetes-upgrade-284631 Clientid:01:52:54:00:c8:50:79}
	I0210 13:08:27.355386  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | domain kubernetes-upgrade-284631 has defined IP address 192.168.50.25 and MAC address 52:54:00:c8:50:79 in network mk-kubernetes-upgrade-284631
	I0210 13:08:27.355604  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetSSHPort
	I0210 13:08:27.355818  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetSSHKeyPath
	I0210 13:08:27.356001  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetSSHKeyPath
	I0210 13:08:27.356114  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetSSHUsername
	I0210 13:08:27.356300  668142 main.go:141] libmachine: Using SSH client type: native
	I0210 13:08:27.356506  668142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.50.25 22 <nil> <nil>}
	I0210 13:08:27.356517  668142 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0210 13:08:27.453786  668142 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0210 13:08:27.453918  668142 main.go:141] libmachine: found compatible host: buildroot
	I0210 13:08:27.453933  668142 main.go:141] libmachine: Provisioning with buildroot...
	I0210 13:08:27.453947  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetMachineName
	I0210 13:08:27.454186  668142 buildroot.go:166] provisioning hostname "kubernetes-upgrade-284631"
	I0210 13:08:27.454229  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetMachineName
	I0210 13:08:27.454445  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetSSHHostname
	I0210 13:08:27.457393  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | domain kubernetes-upgrade-284631 has defined MAC address 52:54:00:c8:50:79 in network mk-kubernetes-upgrade-284631
	I0210 13:08:27.457768  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:50:79", ip: ""} in network mk-kubernetes-upgrade-284631: {Iface:virbr3 ExpiryTime:2025-02-10 14:08:17 +0000 UTC Type:0 Mac:52:54:00:c8:50:79 Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:kubernetes-upgrade-284631 Clientid:01:52:54:00:c8:50:79}
	I0210 13:08:27.457810  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | domain kubernetes-upgrade-284631 has defined IP address 192.168.50.25 and MAC address 52:54:00:c8:50:79 in network mk-kubernetes-upgrade-284631
	I0210 13:08:27.457950  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetSSHPort
	I0210 13:08:27.458115  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetSSHKeyPath
	I0210 13:08:27.458274  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetSSHKeyPath
	I0210 13:08:27.458395  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetSSHUsername
	I0210 13:08:27.458556  668142 main.go:141] libmachine: Using SSH client type: native
	I0210 13:08:27.458736  668142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.50.25 22 <nil> <nil>}
	I0210 13:08:27.458753  668142 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-284631 && echo "kubernetes-upgrade-284631" | sudo tee /etc/hostname
	I0210 13:08:27.572225  668142 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-284631
	
	I0210 13:08:27.572256  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetSSHHostname
	I0210 13:08:27.575369  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | domain kubernetes-upgrade-284631 has defined MAC address 52:54:00:c8:50:79 in network mk-kubernetes-upgrade-284631
	I0210 13:08:27.575746  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:50:79", ip: ""} in network mk-kubernetes-upgrade-284631: {Iface:virbr3 ExpiryTime:2025-02-10 14:08:17 +0000 UTC Type:0 Mac:52:54:00:c8:50:79 Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:kubernetes-upgrade-284631 Clientid:01:52:54:00:c8:50:79}
	I0210 13:08:27.575798  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | domain kubernetes-upgrade-284631 has defined IP address 192.168.50.25 and MAC address 52:54:00:c8:50:79 in network mk-kubernetes-upgrade-284631
	I0210 13:08:27.576037  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetSSHPort
	I0210 13:08:27.576274  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetSSHKeyPath
	I0210 13:08:27.576472  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetSSHKeyPath
	I0210 13:08:27.576583  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetSSHUsername
	I0210 13:08:27.576776  668142 main.go:141] libmachine: Using SSH client type: native
	I0210 13:08:27.577015  668142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.50.25 22 <nil> <nil>}
	I0210 13:08:27.577043  668142 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-284631' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-284631/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-284631' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0210 13:08:27.686007  668142 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0210 13:08:27.686049  668142 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20383-625153/.minikube CaCertPath:/home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20383-625153/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20383-625153/.minikube}
	I0210 13:08:27.686107  668142 buildroot.go:174] setting up certificates
	I0210 13:08:27.686130  668142 provision.go:84] configureAuth start
	I0210 13:08:27.686149  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetMachineName
	I0210 13:08:27.686486  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetIP
	I0210 13:08:27.689470  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | domain kubernetes-upgrade-284631 has defined MAC address 52:54:00:c8:50:79 in network mk-kubernetes-upgrade-284631
	I0210 13:08:27.689852  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:50:79", ip: ""} in network mk-kubernetes-upgrade-284631: {Iface:virbr3 ExpiryTime:2025-02-10 14:08:17 +0000 UTC Type:0 Mac:52:54:00:c8:50:79 Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:kubernetes-upgrade-284631 Clientid:01:52:54:00:c8:50:79}
	I0210 13:08:27.689880  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | domain kubernetes-upgrade-284631 has defined IP address 192.168.50.25 and MAC address 52:54:00:c8:50:79 in network mk-kubernetes-upgrade-284631
	I0210 13:08:27.690071  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetSSHHostname
	I0210 13:08:27.692371  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | domain kubernetes-upgrade-284631 has defined MAC address 52:54:00:c8:50:79 in network mk-kubernetes-upgrade-284631
	I0210 13:08:27.692735  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:50:79", ip: ""} in network mk-kubernetes-upgrade-284631: {Iface:virbr3 ExpiryTime:2025-02-10 14:08:17 +0000 UTC Type:0 Mac:52:54:00:c8:50:79 Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:kubernetes-upgrade-284631 Clientid:01:52:54:00:c8:50:79}
	I0210 13:08:27.692768  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | domain kubernetes-upgrade-284631 has defined IP address 192.168.50.25 and MAC address 52:54:00:c8:50:79 in network mk-kubernetes-upgrade-284631
	I0210 13:08:27.692933  668142 provision.go:143] copyHostCerts
	I0210 13:08:27.693038  668142 exec_runner.go:144] found /home/jenkins/minikube-integration/20383-625153/.minikube/cert.pem, removing ...
	I0210 13:08:27.693055  668142 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20383-625153/.minikube/cert.pem
	I0210 13:08:27.693134  668142 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20383-625153/.minikube/cert.pem (1123 bytes)
	I0210 13:08:27.693280  668142 exec_runner.go:144] found /home/jenkins/minikube-integration/20383-625153/.minikube/key.pem, removing ...
	I0210 13:08:27.693292  668142 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20383-625153/.minikube/key.pem
	I0210 13:08:27.693318  668142 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20383-625153/.minikube/key.pem (1675 bytes)
	I0210 13:08:27.693384  668142 exec_runner.go:144] found /home/jenkins/minikube-integration/20383-625153/.minikube/ca.pem, removing ...
	I0210 13:08:27.693392  668142 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20383-625153/.minikube/ca.pem
	I0210 13:08:27.693409  668142 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20383-625153/.minikube/ca.pem (1082 bytes)
	I0210 13:08:27.693479  668142 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20383-625153/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-284631 san=[127.0.0.1 192.168.50.25 kubernetes-upgrade-284631 localhost minikube]
	I0210 13:08:27.925745  668142 provision.go:177] copyRemoteCerts
	I0210 13:08:27.925817  668142 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0210 13:08:27.925846  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetSSHHostname
	I0210 13:08:27.928326  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | domain kubernetes-upgrade-284631 has defined MAC address 52:54:00:c8:50:79 in network mk-kubernetes-upgrade-284631
	I0210 13:08:27.928625  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:50:79", ip: ""} in network mk-kubernetes-upgrade-284631: {Iface:virbr3 ExpiryTime:2025-02-10 14:08:17 +0000 UTC Type:0 Mac:52:54:00:c8:50:79 Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:kubernetes-upgrade-284631 Clientid:01:52:54:00:c8:50:79}
	I0210 13:08:27.928661  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | domain kubernetes-upgrade-284631 has defined IP address 192.168.50.25 and MAC address 52:54:00:c8:50:79 in network mk-kubernetes-upgrade-284631
	I0210 13:08:27.928792  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetSSHPort
	I0210 13:08:27.929017  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetSSHKeyPath
	I0210 13:08:27.929179  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetSSHUsername
	I0210 13:08:27.929286  668142 sshutil.go:53] new ssh client: &{IP:192.168.50.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/kubernetes-upgrade-284631/id_rsa Username:docker}
	I0210 13:08:28.011668  668142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0210 13:08:28.034714  668142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0210 13:08:28.056489  668142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0210 13:08:28.079833  668142 provision.go:87] duration metric: took 393.682477ms to configureAuth
	I0210 13:08:28.079871  668142 buildroot.go:189] setting minikube options for container-runtime
	I0210 13:08:28.080074  668142 config.go:182] Loaded profile config "kubernetes-upgrade-284631": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0210 13:08:28.080189  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetSSHHostname
	I0210 13:08:28.083048  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | domain kubernetes-upgrade-284631 has defined MAC address 52:54:00:c8:50:79 in network mk-kubernetes-upgrade-284631
	I0210 13:08:28.083458  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:50:79", ip: ""} in network mk-kubernetes-upgrade-284631: {Iface:virbr3 ExpiryTime:2025-02-10 14:08:17 +0000 UTC Type:0 Mac:52:54:00:c8:50:79 Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:kubernetes-upgrade-284631 Clientid:01:52:54:00:c8:50:79}
	I0210 13:08:28.083488  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | domain kubernetes-upgrade-284631 has defined IP address 192.168.50.25 and MAC address 52:54:00:c8:50:79 in network mk-kubernetes-upgrade-284631
	I0210 13:08:28.083663  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetSSHPort
	I0210 13:08:28.083891  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetSSHKeyPath
	I0210 13:08:28.084086  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetSSHKeyPath
	I0210 13:08:28.084223  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetSSHUsername
	I0210 13:08:28.084376  668142 main.go:141] libmachine: Using SSH client type: native
	I0210 13:08:28.084580  668142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.50.25 22 <nil> <nil>}
	I0210 13:08:28.084598  668142 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0210 13:08:28.311702  668142 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0210 13:08:28.311741  668142 main.go:141] libmachine: Checking connection to Docker...
	I0210 13:08:28.311754  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetURL
	I0210 13:08:28.313282  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | using libvirt version 6000000
	I0210 13:08:28.315839  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | domain kubernetes-upgrade-284631 has defined MAC address 52:54:00:c8:50:79 in network mk-kubernetes-upgrade-284631
	I0210 13:08:28.316178  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:50:79", ip: ""} in network mk-kubernetes-upgrade-284631: {Iface:virbr3 ExpiryTime:2025-02-10 14:08:17 +0000 UTC Type:0 Mac:52:54:00:c8:50:79 Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:kubernetes-upgrade-284631 Clientid:01:52:54:00:c8:50:79}
	I0210 13:08:28.316213  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | domain kubernetes-upgrade-284631 has defined IP address 192.168.50.25 and MAC address 52:54:00:c8:50:79 in network mk-kubernetes-upgrade-284631
	I0210 13:08:28.316378  668142 main.go:141] libmachine: Docker is up and running!
	I0210 13:08:28.316395  668142 main.go:141] libmachine: Reticulating splines...
	I0210 13:08:28.316403  668142 client.go:171] duration metric: took 26.216773315s to LocalClient.Create
	I0210 13:08:28.316434  668142 start.go:167] duration metric: took 26.216854531s to libmachine.API.Create "kubernetes-upgrade-284631"
	I0210 13:08:28.316449  668142 start.go:293] postStartSetup for "kubernetes-upgrade-284631" (driver="kvm2")
	I0210 13:08:28.316466  668142 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0210 13:08:28.316489  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .DriverName
	I0210 13:08:28.316732  668142 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0210 13:08:28.316756  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetSSHHostname
	I0210 13:08:28.319708  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | domain kubernetes-upgrade-284631 has defined MAC address 52:54:00:c8:50:79 in network mk-kubernetes-upgrade-284631
	I0210 13:08:28.320078  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:50:79", ip: ""} in network mk-kubernetes-upgrade-284631: {Iface:virbr3 ExpiryTime:2025-02-10 14:08:17 +0000 UTC Type:0 Mac:52:54:00:c8:50:79 Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:kubernetes-upgrade-284631 Clientid:01:52:54:00:c8:50:79}
	I0210 13:08:28.320149  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | domain kubernetes-upgrade-284631 has defined IP address 192.168.50.25 and MAC address 52:54:00:c8:50:79 in network mk-kubernetes-upgrade-284631
	I0210 13:08:28.320340  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetSSHPort
	I0210 13:08:28.320527  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetSSHKeyPath
	I0210 13:08:28.320670  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetSSHUsername
	I0210 13:08:28.320811  668142 sshutil.go:53] new ssh client: &{IP:192.168.50.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/kubernetes-upgrade-284631/id_rsa Username:docker}
	I0210 13:08:28.400557  668142 ssh_runner.go:195] Run: cat /etc/os-release
	I0210 13:08:28.404660  668142 info.go:137] Remote host: Buildroot 2023.02.9
	I0210 13:08:28.404680  668142 filesync.go:126] Scanning /home/jenkins/minikube-integration/20383-625153/.minikube/addons for local assets ...
	I0210 13:08:28.404739  668142 filesync.go:126] Scanning /home/jenkins/minikube-integration/20383-625153/.minikube/files for local assets ...
	I0210 13:08:28.404809  668142 filesync.go:149] local asset: /home/jenkins/minikube-integration/20383-625153/.minikube/files/etc/ssl/certs/6323522.pem -> 6323522.pem in /etc/ssl/certs
	I0210 13:08:28.404891  668142 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0210 13:08:28.414597  668142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/files/etc/ssl/certs/6323522.pem --> /etc/ssl/certs/6323522.pem (1708 bytes)
	I0210 13:08:28.436526  668142 start.go:296] duration metric: took 120.058315ms for postStartSetup
	I0210 13:08:28.436608  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetConfigRaw
	I0210 13:08:28.437296  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetIP
	I0210 13:08:28.440109  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | domain kubernetes-upgrade-284631 has defined MAC address 52:54:00:c8:50:79 in network mk-kubernetes-upgrade-284631
	I0210 13:08:28.440445  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:50:79", ip: ""} in network mk-kubernetes-upgrade-284631: {Iface:virbr3 ExpiryTime:2025-02-10 14:08:17 +0000 UTC Type:0 Mac:52:54:00:c8:50:79 Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:kubernetes-upgrade-284631 Clientid:01:52:54:00:c8:50:79}
	I0210 13:08:28.440480  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | domain kubernetes-upgrade-284631 has defined IP address 192.168.50.25 and MAC address 52:54:00:c8:50:79 in network mk-kubernetes-upgrade-284631
	I0210 13:08:28.440646  668142 profile.go:143] Saving config to /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/kubernetes-upgrade-284631/config.json ...
	I0210 13:08:28.440831  668142 start.go:128] duration metric: took 26.366651518s to createHost
	I0210 13:08:28.440853  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetSSHHostname
	I0210 13:08:28.443346  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | domain kubernetes-upgrade-284631 has defined MAC address 52:54:00:c8:50:79 in network mk-kubernetes-upgrade-284631
	I0210 13:08:28.443633  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:50:79", ip: ""} in network mk-kubernetes-upgrade-284631: {Iface:virbr3 ExpiryTime:2025-02-10 14:08:17 +0000 UTC Type:0 Mac:52:54:00:c8:50:79 Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:kubernetes-upgrade-284631 Clientid:01:52:54:00:c8:50:79}
	I0210 13:08:28.443663  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | domain kubernetes-upgrade-284631 has defined IP address 192.168.50.25 and MAC address 52:54:00:c8:50:79 in network mk-kubernetes-upgrade-284631
	I0210 13:08:28.443764  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetSSHPort
	I0210 13:08:28.443953  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetSSHKeyPath
	I0210 13:08:28.444129  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetSSHKeyPath
	I0210 13:08:28.444291  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetSSHUsername
	I0210 13:08:28.444461  668142 main.go:141] libmachine: Using SSH client type: native
	I0210 13:08:28.444621  668142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.50.25 22 <nil> <nil>}
	I0210 13:08:28.444631  668142 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0210 13:08:28.545603  668142 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739192908.523892713
	
	I0210 13:08:28.545636  668142 fix.go:216] guest clock: 1739192908.523892713
	I0210 13:08:28.545646  668142 fix.go:229] Guest: 2025-02-10 13:08:28.523892713 +0000 UTC Remote: 2025-02-10 13:08:28.440843744 +0000 UTC m=+48.085363756 (delta=83.048969ms)
	I0210 13:08:28.545700  668142 fix.go:200] guest clock delta is within tolerance: 83.048969ms
	I0210 13:08:28.545710  668142 start.go:83] releasing machines lock for "kubernetes-upgrade-284631", held for 26.471760047s
	I0210 13:08:28.545742  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .DriverName
	I0210 13:08:28.546000  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetIP
	I0210 13:08:28.548657  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | domain kubernetes-upgrade-284631 has defined MAC address 52:54:00:c8:50:79 in network mk-kubernetes-upgrade-284631
	I0210 13:08:28.549066  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:50:79", ip: ""} in network mk-kubernetes-upgrade-284631: {Iface:virbr3 ExpiryTime:2025-02-10 14:08:17 +0000 UTC Type:0 Mac:52:54:00:c8:50:79 Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:kubernetes-upgrade-284631 Clientid:01:52:54:00:c8:50:79}
	I0210 13:08:28.549098  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | domain kubernetes-upgrade-284631 has defined IP address 192.168.50.25 and MAC address 52:54:00:c8:50:79 in network mk-kubernetes-upgrade-284631
	I0210 13:08:28.549259  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .DriverName
	I0210 13:08:28.549790  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .DriverName
	I0210 13:08:28.549993  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .DriverName
	I0210 13:08:28.550085  668142 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0210 13:08:28.550133  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetSSHHostname
	I0210 13:08:28.550238  668142 ssh_runner.go:195] Run: cat /version.json
	I0210 13:08:28.550266  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetSSHHostname
	I0210 13:08:28.552616  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | domain kubernetes-upgrade-284631 has defined MAC address 52:54:00:c8:50:79 in network mk-kubernetes-upgrade-284631
	I0210 13:08:28.552944  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:50:79", ip: ""} in network mk-kubernetes-upgrade-284631: {Iface:virbr3 ExpiryTime:2025-02-10 14:08:17 +0000 UTC Type:0 Mac:52:54:00:c8:50:79 Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:kubernetes-upgrade-284631 Clientid:01:52:54:00:c8:50:79}
	I0210 13:08:28.552983  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | domain kubernetes-upgrade-284631 has defined IP address 192.168.50.25 and MAC address 52:54:00:c8:50:79 in network mk-kubernetes-upgrade-284631
	I0210 13:08:28.553038  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | domain kubernetes-upgrade-284631 has defined MAC address 52:54:00:c8:50:79 in network mk-kubernetes-upgrade-284631
	I0210 13:08:28.553185  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetSSHPort
	I0210 13:08:28.553377  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetSSHKeyPath
	I0210 13:08:28.553542  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetSSHUsername
	I0210 13:08:28.553579  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:50:79", ip: ""} in network mk-kubernetes-upgrade-284631: {Iface:virbr3 ExpiryTime:2025-02-10 14:08:17 +0000 UTC Type:0 Mac:52:54:00:c8:50:79 Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:kubernetes-upgrade-284631 Clientid:01:52:54:00:c8:50:79}
	I0210 13:08:28.553609  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | domain kubernetes-upgrade-284631 has defined IP address 192.168.50.25 and MAC address 52:54:00:c8:50:79 in network mk-kubernetes-upgrade-284631
	I0210 13:08:28.553691  668142 sshutil.go:53] new ssh client: &{IP:192.168.50.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/kubernetes-upgrade-284631/id_rsa Username:docker}
	I0210 13:08:28.553773  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetSSHPort
	I0210 13:08:28.553917  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetSSHKeyPath
	I0210 13:08:28.554095  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetSSHUsername
	I0210 13:08:28.554257  668142 sshutil.go:53] new ssh client: &{IP:192.168.50.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/kubernetes-upgrade-284631/id_rsa Username:docker}
	I0210 13:08:28.650278  668142 ssh_runner.go:195] Run: systemctl --version
	I0210 13:08:28.656192  668142 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0210 13:08:28.812162  668142 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0210 13:08:28.820503  668142 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0210 13:08:28.820596  668142 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0210 13:08:28.839188  668142 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0210 13:08:28.839229  668142 start.go:495] detecting cgroup driver to use...
	I0210 13:08:28.839300  668142 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0210 13:08:28.855738  668142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0210 13:08:28.869672  668142 docker.go:217] disabling cri-docker service (if available) ...
	I0210 13:08:28.869741  668142 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0210 13:08:28.884689  668142 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0210 13:08:28.898779  668142 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0210 13:08:29.015877  668142 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0210 13:08:29.180461  668142 docker.go:233] disabling docker service ...
	I0210 13:08:29.180552  668142 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0210 13:08:29.194614  668142 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0210 13:08:29.206773  668142 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0210 13:08:29.331886  668142 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0210 13:08:29.471921  668142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0210 13:08:29.486886  668142 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0210 13:08:29.507251  668142 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0210 13:08:29.507328  668142 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:08:29.520863  668142 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0210 13:08:29.520941  668142 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:08:29.531553  668142 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:08:29.541489  668142 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:08:29.551516  668142 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0210 13:08:29.561671  668142 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0210 13:08:29.570698  668142 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0210 13:08:29.570855  668142 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0210 13:08:29.583643  668142 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0210 13:08:29.592301  668142 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 13:08:29.711061  668142 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0210 13:08:29.799416  668142 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0210 13:08:29.799496  668142 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0210 13:08:29.803953  668142 start.go:563] Will wait 60s for crictl version
	I0210 13:08:29.804016  668142 ssh_runner.go:195] Run: which crictl
	I0210 13:08:29.807414  668142 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0210 13:08:29.848771  668142 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0210 13:08:29.848859  668142 ssh_runner.go:195] Run: crio --version
	I0210 13:08:29.874980  668142 ssh_runner.go:195] Run: crio --version
	I0210 13:08:29.902695  668142 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0210 13:08:29.903998  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetIP
	I0210 13:08:29.906969  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | domain kubernetes-upgrade-284631 has defined MAC address 52:54:00:c8:50:79 in network mk-kubernetes-upgrade-284631
	I0210 13:08:29.907314  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:50:79", ip: ""} in network mk-kubernetes-upgrade-284631: {Iface:virbr3 ExpiryTime:2025-02-10 14:08:17 +0000 UTC Type:0 Mac:52:54:00:c8:50:79 Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:kubernetes-upgrade-284631 Clientid:01:52:54:00:c8:50:79}
	I0210 13:08:29.907344  668142 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | domain kubernetes-upgrade-284631 has defined IP address 192.168.50.25 and MAC address 52:54:00:c8:50:79 in network mk-kubernetes-upgrade-284631
	I0210 13:08:29.907600  668142 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0210 13:08:29.911542  668142 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 13:08:29.923373  668142 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-284631 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-284631 Na
mespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.25 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePa
th: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0210 13:08:29.923497  668142 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0210 13:08:29.923546  668142 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 13:08:29.955391  668142 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0210 13:08:29.955475  668142 ssh_runner.go:195] Run: which lz4
	I0210 13:08:29.959349  668142 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0210 13:08:29.963327  668142 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0210 13:08:29.963369  668142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0210 13:08:31.326084  668142 crio.go:462] duration metric: took 1.36676821s to copy over tarball
	I0210 13:08:31.326165  668142 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0210 13:08:33.777293  668142 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.45109097s)
	I0210 13:08:33.777339  668142 crio.go:469] duration metric: took 2.451223902s to extract the tarball
	I0210 13:08:33.777349  668142 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0210 13:08:33.818509  668142 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 13:08:33.865210  668142 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0210 13:08:33.865243  668142 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0210 13:08:33.865308  668142 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 13:08:33.865354  668142 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0210 13:08:33.865365  668142 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0210 13:08:33.865317  668142 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0210 13:08:33.865395  668142 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0210 13:08:33.865399  668142 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0210 13:08:33.865407  668142 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0210 13:08:33.865445  668142 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 13:08:33.867003  668142 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 13:08:33.867003  668142 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0210 13:08:33.867108  668142 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0210 13:08:33.867001  668142 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0210 13:08:33.867003  668142 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0210 13:08:33.867003  668142 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 13:08:33.867003  668142 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0210 13:08:33.867014  668142 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0210 13:08:34.017490  668142 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0210 13:08:34.057811  668142 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0210 13:08:34.057881  668142 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0210 13:08:34.057934  668142 ssh_runner.go:195] Run: which crictl
	I0210 13:08:34.060570  668142 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 13:08:34.062001  668142 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0210 13:08:34.082781  668142 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0210 13:08:34.083545  668142 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0210 13:08:34.087006  668142 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0210 13:08:34.092483  668142 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0210 13:08:34.097291  668142 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0210 13:08:34.147794  668142 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0210 13:08:34.147854  668142 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 13:08:34.147869  668142 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0210 13:08:34.147895  668142 ssh_runner.go:195] Run: which crictl
	I0210 13:08:34.232190  668142 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0210 13:08:34.232250  668142 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0210 13:08:34.232313  668142 ssh_runner.go:195] Run: which crictl
	I0210 13:08:34.244933  668142 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0210 13:08:34.244952  668142 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0210 13:08:34.244990  668142 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0210 13:08:34.244990  668142 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0210 13:08:34.245013  668142 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0210 13:08:34.245040  668142 ssh_runner.go:195] Run: which crictl
	I0210 13:08:34.245047  668142 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0210 13:08:34.245075  668142 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0210 13:08:34.245041  668142 ssh_runner.go:195] Run: which crictl
	I0210 13:08:34.245090  668142 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 13:08:34.245116  668142 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0210 13:08:34.245081  668142 ssh_runner.go:195] Run: which crictl
	I0210 13:08:34.245158  668142 ssh_runner.go:195] Run: which crictl
	I0210 13:08:34.268539  668142 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0210 13:08:34.268587  668142 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0210 13:08:34.303108  668142 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 13:08:34.303145  668142 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0210 13:08:34.303186  668142 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0210 13:08:34.303260  668142 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0210 13:08:34.303264  668142 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0210 13:08:34.364269  668142 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0210 13:08:34.364284  668142 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20383-625153/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0210 13:08:34.431922  668142 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0210 13:08:34.431975  668142 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0210 13:08:34.431984  668142 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0210 13:08:34.431923  668142 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0210 13:08:34.432030  668142 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 13:08:34.469099  668142 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0210 13:08:34.567395  668142 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20383-625153/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0210 13:08:34.567511  668142 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0210 13:08:34.567547  668142 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0210 13:08:34.567571  668142 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0210 13:08:34.567643  668142 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0210 13:08:34.569230  668142 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20383-625153/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0210 13:08:34.642377  668142 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20383-625153/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0210 13:08:34.651025  668142 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20383-625153/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0210 13:08:34.656420  668142 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20383-625153/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0210 13:08:34.656513  668142 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20383-625153/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0210 13:08:34.873112  668142 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 13:08:35.017614  668142 cache_images.go:92] duration metric: took 1.152353269s to LoadCachedImages
	W0210 13:08:35.017738  668142 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20383-625153/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20383-625153/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0210 13:08:35.017767  668142 kubeadm.go:934] updating node { 192.168.50.25 8443 v1.20.0 crio true true} ...
	I0210 13:08:35.017905  668142 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-284631 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.25
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-284631 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0210 13:08:35.017997  668142 ssh_runner.go:195] Run: crio config
	I0210 13:08:35.076538  668142 cni.go:84] Creating CNI manager for ""
	I0210 13:08:35.076566  668142 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 13:08:35.076579  668142 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0210 13:08:35.076604  668142 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.25 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-284631 NodeName:kubernetes-upgrade-284631 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.25"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.25 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0210 13:08:35.076787  668142 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.25
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-284631"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.25
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.25"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0210 13:08:35.076874  668142 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0210 13:08:35.086823  668142 binaries.go:44] Found k8s binaries, skipping transfer
	I0210 13:08:35.086911  668142 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0210 13:08:35.096537  668142 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0210 13:08:35.114971  668142 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0210 13:08:35.133140  668142 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0210 13:08:35.149324  668142 ssh_runner.go:195] Run: grep 192.168.50.25	control-plane.minikube.internal$ /etc/hosts
	I0210 13:08:35.153076  668142 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.25	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 13:08:35.167767  668142 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 13:08:35.277303  668142 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 13:08:35.297408  668142 certs.go:68] Setting up /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/kubernetes-upgrade-284631 for IP: 192.168.50.25
	I0210 13:08:35.297458  668142 certs.go:194] generating shared ca certs ...
	I0210 13:08:35.297504  668142 certs.go:226] acquiring lock for ca certs: {Name:mkf2f72f82a6bd7e3c16bb224cd26b80c3c89e0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:08:35.297668  668142 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20383-625153/.minikube/ca.key
	I0210 13:08:35.297733  668142 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20383-625153/.minikube/proxy-client-ca.key
	I0210 13:08:35.297749  668142 certs.go:256] generating profile certs ...
	I0210 13:08:35.297827  668142 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/kubernetes-upgrade-284631/client.key
	I0210 13:08:35.297847  668142 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/kubernetes-upgrade-284631/client.crt with IP's: []
	I0210 13:08:35.481129  668142 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/kubernetes-upgrade-284631/client.crt ...
	I0210 13:08:35.481168  668142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/kubernetes-upgrade-284631/client.crt: {Name:mkc21c958813967f77881600bef74844e165f65b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:08:35.481369  668142 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/kubernetes-upgrade-284631/client.key ...
	I0210 13:08:35.481392  668142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/kubernetes-upgrade-284631/client.key: {Name:mk532f6f8b1229b5363802d7da449a8134f06499 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:08:35.481511  668142 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/kubernetes-upgrade-284631/apiserver.key.8e8af368
	I0210 13:08:35.481535  668142 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/kubernetes-upgrade-284631/apiserver.crt.8e8af368 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.25]
	I0210 13:08:35.602186  668142 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/kubernetes-upgrade-284631/apiserver.crt.8e8af368 ...
	I0210 13:08:35.602235  668142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/kubernetes-upgrade-284631/apiserver.crt.8e8af368: {Name:mkca8c86339fa0a3efb77059058391615bee6a6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:08:35.602501  668142 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/kubernetes-upgrade-284631/apiserver.key.8e8af368 ...
	I0210 13:08:35.602532  668142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/kubernetes-upgrade-284631/apiserver.key.8e8af368: {Name:mk7c67c8ba4a18bf5ce573cf35c4ccd77719d03e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:08:35.602687  668142 certs.go:381] copying /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/kubernetes-upgrade-284631/apiserver.crt.8e8af368 -> /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/kubernetes-upgrade-284631/apiserver.crt
	I0210 13:08:35.602838  668142 certs.go:385] copying /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/kubernetes-upgrade-284631/apiserver.key.8e8af368 -> /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/kubernetes-upgrade-284631/apiserver.key
	I0210 13:08:35.602952  668142 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/kubernetes-upgrade-284631/proxy-client.key
	I0210 13:08:35.603084  668142 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/kubernetes-upgrade-284631/proxy-client.crt with IP's: []
	I0210 13:08:35.863812  668142 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/kubernetes-upgrade-284631/proxy-client.crt ...
	I0210 13:08:35.863870  668142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/kubernetes-upgrade-284631/proxy-client.crt: {Name:mke680ab8efffaaf76958b7e8d94356e6b4b2755 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:08:35.864146  668142 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/kubernetes-upgrade-284631/proxy-client.key ...
	I0210 13:08:35.864185  668142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/kubernetes-upgrade-284631/proxy-client.key: {Name:mk1234b7b3cbfb8a2959712c4f65c238f184af7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:08:35.864511  668142 certs.go:484] found cert: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/632352.pem (1338 bytes)
	W0210 13:08:35.864579  668142 certs.go:480] ignoring /home/jenkins/minikube-integration/20383-625153/.minikube/certs/632352_empty.pem, impossibly tiny 0 bytes
	I0210 13:08:35.864596  668142 certs.go:484] found cert: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca-key.pem (1675 bytes)
	I0210 13:08:35.864632  668142 certs.go:484] found cert: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca.pem (1082 bytes)
	I0210 13:08:35.864672  668142 certs.go:484] found cert: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/cert.pem (1123 bytes)
	I0210 13:08:35.864708  668142 certs.go:484] found cert: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/key.pem (1675 bytes)
	I0210 13:08:35.864772  668142 certs.go:484] found cert: /home/jenkins/minikube-integration/20383-625153/.minikube/files/etc/ssl/certs/6323522.pem (1708 bytes)
	I0210 13:08:35.865801  668142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0210 13:08:35.894040  668142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0210 13:08:35.921367  668142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0210 13:08:35.948373  668142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0210 13:08:35.974721  668142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/kubernetes-upgrade-284631/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0210 13:08:36.071501  668142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/kubernetes-upgrade-284631/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0210 13:08:36.099393  668142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/kubernetes-upgrade-284631/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0210 13:08:36.129486  668142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/kubernetes-upgrade-284631/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0210 13:08:36.202105  668142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/certs/632352.pem --> /usr/share/ca-certificates/632352.pem (1338 bytes)
	I0210 13:08:36.227578  668142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/files/etc/ssl/certs/6323522.pem --> /usr/share/ca-certificates/6323522.pem (1708 bytes)
	I0210 13:08:36.260090  668142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0210 13:08:36.283144  668142 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0210 13:08:36.299548  668142 ssh_runner.go:195] Run: openssl version
	I0210 13:08:36.307411  668142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6323522.pem && ln -fs /usr/share/ca-certificates/6323522.pem /etc/ssl/certs/6323522.pem"
	I0210 13:08:36.318366  668142 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6323522.pem
	I0210 13:08:36.323852  668142 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 10 12:13 /usr/share/ca-certificates/6323522.pem
	I0210 13:08:36.323925  668142 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6323522.pem
	I0210 13:08:36.333569  668142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6323522.pem /etc/ssl/certs/3ec20f2e.0"
	I0210 13:08:36.347050  668142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0210 13:08:36.363003  668142 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0210 13:08:36.370296  668142 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 10 12:05 /usr/share/ca-certificates/minikubeCA.pem
	I0210 13:08:36.370377  668142 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0210 13:08:36.378029  668142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0210 13:08:36.390782  668142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/632352.pem && ln -fs /usr/share/ca-certificates/632352.pem /etc/ssl/certs/632352.pem"
	I0210 13:08:36.402617  668142 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/632352.pem
	I0210 13:08:36.407020  668142 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 10 12:13 /usr/share/ca-certificates/632352.pem
	I0210 13:08:36.407086  668142 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/632352.pem
	I0210 13:08:36.412657  668142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/632352.pem /etc/ssl/certs/51391683.0"
	I0210 13:08:36.424469  668142 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0210 13:08:36.428326  668142 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0210 13:08:36.428387  668142 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-284631 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-284631 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.25 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 13:08:36.428484  668142 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0210 13:08:36.428541  668142 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0210 13:08:36.468935  668142 cri.go:89] found id: ""
	I0210 13:08:36.469009  668142 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0210 13:08:36.479010  668142 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0210 13:08:36.490422  668142 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 13:08:36.501427  668142 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 13:08:36.501451  668142 kubeadm.go:157] found existing configuration files:
	
	I0210 13:08:36.501499  668142 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0210 13:08:36.511219  668142 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 13:08:36.511284  668142 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 13:08:36.520565  668142 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0210 13:08:36.530543  668142 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 13:08:36.530603  668142 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 13:08:36.540831  668142 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0210 13:08:36.550514  668142 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 13:08:36.550567  668142 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 13:08:36.560381  668142 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0210 13:08:36.568717  668142 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 13:08:36.568775  668142 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0210 13:08:36.577497  668142 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0210 13:08:36.705792  668142 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0210 13:08:36.705889  668142 kubeadm.go:310] [preflight] Running pre-flight checks
	I0210 13:08:36.852173  668142 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0210 13:08:36.852340  668142 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0210 13:08:36.852521  668142 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0210 13:08:37.041670  668142 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0210 13:08:37.136225  668142 out.go:235]   - Generating certificates and keys ...
	I0210 13:08:37.136388  668142 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0210 13:08:37.136482  668142 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0210 13:08:37.149770  668142 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0210 13:08:37.282739  668142 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0210 13:08:37.369257  668142 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0210 13:08:37.473196  668142 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0210 13:08:37.733878  668142 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0210 13:08:37.734152  668142 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-284631 localhost] and IPs [192.168.50.25 127.0.0.1 ::1]
	I0210 13:08:37.969948  668142 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0210 13:08:37.970404  668142 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-284631 localhost] and IPs [192.168.50.25 127.0.0.1 ::1]
	I0210 13:08:38.347115  668142 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0210 13:08:38.653299  668142 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0210 13:08:38.726801  668142 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0210 13:08:38.727318  668142 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0210 13:08:38.989380  668142 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0210 13:08:39.101735  668142 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0210 13:08:39.249410  668142 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0210 13:08:39.402408  668142 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0210 13:08:39.421151  668142 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0210 13:08:39.423029  668142 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0210 13:08:39.423146  668142 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0210 13:08:39.613694  668142 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0210 13:08:39.615637  668142 out.go:235]   - Booting up control plane ...
	I0210 13:08:39.615783  668142 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0210 13:08:39.628430  668142 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0210 13:08:39.629652  668142 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0210 13:08:39.633973  668142 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0210 13:08:39.644292  668142 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0210 13:09:19.637410  668142 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0210 13:09:19.638252  668142 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:09:19.638511  668142 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:09:24.638870  668142 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:09:24.639183  668142 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:09:34.639044  668142 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:09:34.639299  668142 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:09:54.639056  668142 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:09:54.639331  668142 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:10:34.640334  668142 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:10:34.640667  668142 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:10:34.640693  668142 kubeadm.go:310] 
	I0210 13:10:34.640755  668142 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0210 13:10:34.640824  668142 kubeadm.go:310] 		timed out waiting for the condition
	I0210 13:10:34.640834  668142 kubeadm.go:310] 
	I0210 13:10:34.640900  668142 kubeadm.go:310] 	This error is likely caused by:
	I0210 13:10:34.640960  668142 kubeadm.go:310] 		- The kubelet is not running
	I0210 13:10:34.641087  668142 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0210 13:10:34.641119  668142 kubeadm.go:310] 
	I0210 13:10:34.641238  668142 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0210 13:10:34.641310  668142 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0210 13:10:34.641396  668142 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0210 13:10:34.641425  668142 kubeadm.go:310] 
	I0210 13:10:34.641569  668142 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0210 13:10:34.641675  668142 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0210 13:10:34.641689  668142 kubeadm.go:310] 
	I0210 13:10:34.641811  668142 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0210 13:10:34.641941  668142 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0210 13:10:34.642052  668142 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0210 13:10:34.642160  668142 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0210 13:10:34.642172  668142 kubeadm.go:310] 
	I0210 13:10:34.642968  668142 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0210 13:10:34.643102  668142 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0210 13:10:34.643203  668142 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0210 13:10:34.643386  668142 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-284631 localhost] and IPs [192.168.50.25 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-284631 localhost] and IPs [192.168.50.25 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-284631 localhost] and IPs [192.168.50.25 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-284631 localhost] and IPs [192.168.50.25 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0210 13:10:34.643442  668142 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0210 13:10:36.228269  668142 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.584786797s)
	I0210 13:10:36.228356  668142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 13:10:36.243563  668142 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 13:10:36.255269  668142 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 13:10:36.255292  668142 kubeadm.go:157] found existing configuration files:
	
	I0210 13:10:36.255345  668142 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0210 13:10:36.265557  668142 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 13:10:36.265637  668142 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 13:10:36.275031  668142 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0210 13:10:36.284422  668142 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 13:10:36.284493  668142 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 13:10:36.293695  668142 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0210 13:10:36.303598  668142 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 13:10:36.303706  668142 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 13:10:36.313908  668142 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0210 13:10:36.323197  668142 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 13:10:36.323255  668142 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0210 13:10:36.332068  668142 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0210 13:10:36.394989  668142 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0210 13:10:36.395077  668142 kubeadm.go:310] [preflight] Running pre-flight checks
	I0210 13:10:36.531333  668142 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0210 13:10:36.531487  668142 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0210 13:10:36.531649  668142 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0210 13:10:36.723805  668142 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0210 13:10:36.725611  668142 out.go:235]   - Generating certificates and keys ...
	I0210 13:10:36.725729  668142 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0210 13:10:36.725841  668142 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0210 13:10:36.725957  668142 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0210 13:10:36.726034  668142 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0210 13:10:36.726152  668142 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0210 13:10:36.726238  668142 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0210 13:10:36.726347  668142 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0210 13:10:36.726508  668142 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0210 13:10:36.727022  668142 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0210 13:10:36.727414  668142 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0210 13:10:36.727494  668142 kubeadm.go:310] [certs] Using the existing "sa" key
	I0210 13:10:36.727580  668142 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0210 13:10:36.927810  668142 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0210 13:10:37.049633  668142 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0210 13:10:37.132910  668142 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0210 13:10:37.561848  668142 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0210 13:10:37.575849  668142 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0210 13:10:37.577548  668142 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0210 13:10:37.577617  668142 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0210 13:10:37.708091  668142 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0210 13:10:37.710776  668142 out.go:235]   - Booting up control plane ...
	I0210 13:10:37.710880  668142 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0210 13:10:37.714208  668142 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0210 13:10:37.714956  668142 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0210 13:10:37.715765  668142 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0210 13:10:37.717730  668142 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0210 13:11:17.720196  668142 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0210 13:11:17.720544  668142 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:11:17.720792  668142 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:11:22.721221  668142 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:11:22.721420  668142 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:11:32.721926  668142 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:11:32.722214  668142 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:11:52.721525  668142 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:11:52.721737  668142 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:12:32.721515  668142 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:12:32.721764  668142 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:12:32.721779  668142 kubeadm.go:310] 
	I0210 13:12:32.721835  668142 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0210 13:12:32.721891  668142 kubeadm.go:310] 		timed out waiting for the condition
	I0210 13:12:32.721901  668142 kubeadm.go:310] 
	I0210 13:12:32.721947  668142 kubeadm.go:310] 	This error is likely caused by:
	I0210 13:12:32.721993  668142 kubeadm.go:310] 		- The kubelet is not running
	I0210 13:12:32.722132  668142 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0210 13:12:32.722161  668142 kubeadm.go:310] 
	I0210 13:12:32.722307  668142 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0210 13:12:32.722358  668142 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0210 13:12:32.722405  668142 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0210 13:12:32.722414  668142 kubeadm.go:310] 
	I0210 13:12:32.722567  668142 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0210 13:12:32.722699  668142 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0210 13:12:32.722722  668142 kubeadm.go:310] 
	I0210 13:12:32.722904  668142 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0210 13:12:32.723080  668142 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0210 13:12:32.723190  668142 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0210 13:12:32.723288  668142 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0210 13:12:32.723300  668142 kubeadm.go:310] 
	I0210 13:12:32.724433  668142 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0210 13:12:32.724547  668142 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0210 13:12:32.724636  668142 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0210 13:12:32.724704  668142 kubeadm.go:394] duration metric: took 3m56.296324697s to StartCluster
	I0210 13:12:32.724749  668142 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:12:32.724819  668142 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:12:32.788587  668142 cri.go:89] found id: ""
	I0210 13:12:32.788619  668142 logs.go:282] 0 containers: []
	W0210 13:12:32.788630  668142 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:12:32.788639  668142 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:12:32.788708  668142 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:12:32.834472  668142 cri.go:89] found id: ""
	I0210 13:12:32.834506  668142 logs.go:282] 0 containers: []
	W0210 13:12:32.834517  668142 logs.go:284] No container was found matching "etcd"
	I0210 13:12:32.834526  668142 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:12:32.834590  668142 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:12:32.876148  668142 cri.go:89] found id: ""
	I0210 13:12:32.876187  668142 logs.go:282] 0 containers: []
	W0210 13:12:32.876197  668142 logs.go:284] No container was found matching "coredns"
	I0210 13:12:32.876203  668142 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:12:32.876261  668142 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:12:32.922653  668142 cri.go:89] found id: ""
	I0210 13:12:32.922687  668142 logs.go:282] 0 containers: []
	W0210 13:12:32.922695  668142 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:12:32.922702  668142 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:12:32.922772  668142 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:12:32.968240  668142 cri.go:89] found id: ""
	I0210 13:12:32.968275  668142 logs.go:282] 0 containers: []
	W0210 13:12:32.968287  668142 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:12:32.968296  668142 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:12:32.968372  668142 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:12:33.016891  668142 cri.go:89] found id: ""
	I0210 13:12:33.016929  668142 logs.go:282] 0 containers: []
	W0210 13:12:33.016942  668142 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:12:33.016951  668142 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:12:33.017025  668142 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:12:33.064582  668142 cri.go:89] found id: ""
	I0210 13:12:33.064617  668142 logs.go:282] 0 containers: []
	W0210 13:12:33.064631  668142 logs.go:284] No container was found matching "kindnet"
	I0210 13:12:33.064648  668142 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:12:33.064677  668142 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:12:33.191831  668142 logs.go:123] Gathering logs for container status ...
	I0210 13:12:33.191897  668142 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:12:33.250509  668142 logs.go:123] Gathering logs for kubelet ...
	I0210 13:12:33.250562  668142 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:12:33.313604  668142 logs.go:123] Gathering logs for dmesg ...
	I0210 13:12:33.313654  668142 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:12:33.332800  668142 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:12:33.332840  668142 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:12:33.786142  668142 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0210 13:12:33.786180  668142 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0210 13:12:33.786239  668142 out.go:270] * 
	* 
	W0210 13:12:33.786316  668142 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0210 13:12:33.786339  668142 out.go:270] * 
	* 
	W0210 13:12:33.787702  668142 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0210 13:12:33.792428  668142 out.go:201] 
	W0210 13:12:33.793690  668142 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0210 13:12:33.793768  668142 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0210 13:12:33.793801  668142 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0210 13:12:33.795195  668142 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-284631 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
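Note: the failure above is the generic kubelet-not-running path, and the captured output itself points at the kubelet and its cgroup driver as the first things to check. The following is only a minimal diagnostic/retry sketch assembled from the commands and flags already shown in this log (the --extra-config override is the log's own suggestion, not a verified fix for this run; the profile name is the one under test):

	# inspect the kubelet on the failing node, as the kubeadm output recommends
	out/minikube-linux-amd64 -p kubernetes-upgrade-284631 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p kubernetes-upgrade-284631 ssh "sudo journalctl -xeu kubelet | tail -n 100"
	# list any control-plane containers cri-o managed to start
	out/minikube-linux-amd64 -p kubernetes-upgrade-284631 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# retry the oldest-k8s start with the cgroup-driver override suggested in the log
	out/minikube-linux-amd64 start -p kubernetes-upgrade-284631 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd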
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-284631
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-284631: (1.955811858s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-284631 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-284631 status --format={{.Host}}: exit status 7 (79.690925ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-284631 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0210 13:12:36.337668  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-284631 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m5.51000349s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-284631 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-284631 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-284631 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (96.438373ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-284631] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20383
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20383-625153/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20383-625153/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-284631
	    minikube start -p kubernetes-upgrade-284631 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2846312 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.1, by running:
	    
	    minikube start -p kubernetes-upgrade-284631 --kubernetes-version=v1.32.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-284631 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-284631 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (6m4.55875224s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2025-02-10 13:19:46.149055316 +0000 UTC m=+4474.528909368
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-284631 -n kubernetes-upgrade-284631
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-284631 logs -n 25
E0210 13:19:46.713812  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/flannel-651187/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-284631 logs -n 25: (1.220764966s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-651187 sudo                                 | flannel-651187         | jenkins | v1.35.0 | 10 Feb 25 13:15 UTC | 10 Feb 25 13:15 UTC |
	|         | systemctl status crio --all                            |                        |         |         |                     |                     |
	|         | --full --no-pager                                      |                        |         |         |                     |                     |
	| ssh     | -p bridge-651187 sudo                                  | bridge-651187          | jenkins | v1.35.0 | 10 Feb 25 13:15 UTC |                     |
	|         | systemctl status containerd                            |                        |         |         |                     |                     |
	|         | --all --full --no-pager                                |                        |         |         |                     |                     |
	| ssh     | -p flannel-651187 sudo                                 | flannel-651187         | jenkins | v1.35.0 | 10 Feb 25 13:15 UTC | 10 Feb 25 13:15 UTC |
	|         | systemctl cat crio --no-pager                          |                        |         |         |                     |                     |
	| ssh     | -p bridge-651187 sudo                                  | bridge-651187          | jenkins | v1.35.0 | 10 Feb 25 13:15 UTC | 10 Feb 25 13:15 UTC |
	|         | systemctl cat containerd                               |                        |         |         |                     |                     |
	|         | --no-pager                                             |                        |         |         |                     |                     |
	| ssh     | -p flannel-651187 sudo find                            | flannel-651187         | jenkins | v1.35.0 | 10 Feb 25 13:15 UTC | 10 Feb 25 13:15 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                        |         |         |                     |                     |
	| ssh     | -p bridge-651187 sudo cat                              | bridge-651187          | jenkins | v1.35.0 | 10 Feb 25 13:15 UTC | 10 Feb 25 13:15 UTC |
	|         | /lib/systemd/system/containerd.service                 |                        |         |         |                     |                     |
	| ssh     | -p flannel-651187 sudo crio                            | flannel-651187         | jenkins | v1.35.0 | 10 Feb 25 13:15 UTC | 10 Feb 25 13:15 UTC |
	|         | config                                                 |                        |         |         |                     |                     |
	| ssh     | -p bridge-651187 sudo cat                              | bridge-651187          | jenkins | v1.35.0 | 10 Feb 25 13:15 UTC | 10 Feb 25 13:15 UTC |
	|         | /etc/containerd/config.toml                            |                        |         |         |                     |                     |
	| delete  | -p flannel-651187                                      | flannel-651187         | jenkins | v1.35.0 | 10 Feb 25 13:15 UTC | 10 Feb 25 13:15 UTC |
	| ssh     | -p bridge-651187 sudo                                  | bridge-651187          | jenkins | v1.35.0 | 10 Feb 25 13:15 UTC | 10 Feb 25 13:15 UTC |
	|         | containerd config dump                                 |                        |         |         |                     |                     |
	| ssh     | -p bridge-651187 sudo                                  | bridge-651187          | jenkins | v1.35.0 | 10 Feb 25 13:15 UTC | 10 Feb 25 13:15 UTC |
	|         | systemctl status crio --all                            |                        |         |         |                     |                     |
	|         | --full --no-pager                                      |                        |         |         |                     |                     |
	| start   | -p no-preload-112306                                   | no-preload-112306      | jenkins | v1.35.0 | 10 Feb 25 13:15 UTC | 10 Feb 25 13:16 UTC |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                        |         |         |                     |                     |
	| ssh     | -p bridge-651187 sudo find                             | bridge-651187          | jenkins | v1.35.0 | 10 Feb 25 13:15 UTC | 10 Feb 25 13:15 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                        |         |         |                     |                     |
	| ssh     | -p bridge-651187 sudo crio                             | bridge-651187          | jenkins | v1.35.0 | 10 Feb 25 13:15 UTC | 10 Feb 25 13:15 UTC |
	|         | config                                                 |                        |         |         |                     |                     |
	| delete  | -p bridge-651187                                       | bridge-651187          | jenkins | v1.35.0 | 10 Feb 25 13:15 UTC | 10 Feb 25 13:15 UTC |
	| start   | -p embed-certs-396582                                  | embed-certs-396582     | jenkins | v1.35.0 | 10 Feb 25 13:15 UTC | 10 Feb 25 13:17 UTC |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-112306             | no-preload-112306      | jenkins | v1.35.0 | 10 Feb 25 13:16 UTC | 10 Feb 25 13:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p no-preload-112306                                   | no-preload-112306      | jenkins | v1.35.0 | 10 Feb 25 13:16 UTC | 10 Feb 25 13:18 UTC |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-396582            | embed-certs-396582     | jenkins | v1.35.0 | 10 Feb 25 13:17 UTC | 10 Feb 25 13:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p embed-certs-396582                                  | embed-certs-396582     | jenkins | v1.35.0 | 10 Feb 25 13:17 UTC | 10 Feb 25 13:18 UTC |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-112306                  | no-preload-112306      | jenkins | v1.35.0 | 10 Feb 25 13:18 UTC | 10 Feb 25 13:18 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p no-preload-112306                                   | no-preload-112306      | jenkins | v1.35.0 | 10 Feb 25 13:18 UTC |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-745712        | old-k8s-version-745712 | jenkins | v1.35.0 | 10 Feb 25 13:18 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-396582                 | embed-certs-396582     | jenkins | v1.35.0 | 10 Feb 25 13:18 UTC | 10 Feb 25 13:18 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p embed-certs-396582                                  | embed-certs-396582     | jenkins | v1.35.0 | 10 Feb 25 13:18 UTC |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                        |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/10 13:18:52
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0210 13:18:52.830804  687731 out.go:345] Setting OutFile to fd 1 ...
	I0210 13:18:52.830910  687731 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 13:18:52.830917  687731 out.go:358] Setting ErrFile to fd 2...
	I0210 13:18:52.830923  687731 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 13:18:52.831092  687731 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20383-625153/.minikube/bin
	I0210 13:18:52.831697  687731 out.go:352] Setting JSON to false
	I0210 13:18:52.832666  687731 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":18083,"bootTime":1739175450,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0210 13:18:52.832769  687731 start.go:139] virtualization: kvm guest
	I0210 13:18:52.835100  687731 out.go:177] * [embed-certs-396582] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0210 13:18:52.836417  687731 out.go:177]   - MINIKUBE_LOCATION=20383
	I0210 13:18:52.836428  687731 notify.go:220] Checking for updates...
	I0210 13:18:52.838856  687731 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 13:18:52.840256  687731 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20383-625153/kubeconfig
	I0210 13:18:52.841695  687731 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20383-625153/.minikube
	I0210 13:18:52.842872  687731 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0210 13:18:52.843969  687731 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0210 13:18:52.845707  687731 config.go:182] Loaded profile config "embed-certs-396582": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 13:18:52.846390  687731 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:18:52.846458  687731 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:18:52.862477  687731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37323
	I0210 13:18:52.862953  687731 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:18:52.863602  687731 main.go:141] libmachine: Using API Version  1
	I0210 13:18:52.863631  687731 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:18:52.863997  687731 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:18:52.864205  687731 main.go:141] libmachine: (embed-certs-396582) Calling .DriverName
	I0210 13:18:52.864462  687731 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 13:18:52.864807  687731 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:18:52.864857  687731 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:18:52.880825  687731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41189
	I0210 13:18:52.881320  687731 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:18:52.881932  687731 main.go:141] libmachine: Using API Version  1
	I0210 13:18:52.881964  687731 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:18:52.882372  687731 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:18:52.882633  687731 main.go:141] libmachine: (embed-certs-396582) Calling .DriverName
	I0210 13:18:52.930571  687731 out.go:177] * Using the kvm2 driver based on existing profile
	I0210 13:18:52.931840  687731 start.go:297] selected driver: kvm2
	I0210 13:18:52.931873  687731 start.go:901] validating driver "kvm2" against &{Name:embed-certs-396582 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-396582 Na
mespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.97 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 13:18:52.932093  687731 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0210 13:18:52.933165  687731 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 13:18:52.933275  687731 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20383-625153/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0210 13:18:52.957246  687731 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0210 13:18:52.957864  687731 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0210 13:18:52.957912  687731 cni.go:84] Creating CNI manager for ""
	I0210 13:18:52.957977  687731 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 13:18:52.958055  687731 start.go:340] cluster config:
	{Name:embed-certs-396582 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-396582 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] API
ServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.97 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 13:18:52.958223  687731 iso.go:125] acquiring lock: {Name:mk013d189757e85c0699014f5ef29205d9d4927f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 13:18:52.960146  687731 out.go:177] * Starting "embed-certs-396582" primary control-plane node in "embed-certs-396582" cluster
	I0210 13:18:52.961441  687731 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0210 13:18:52.961508  687731 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20383-625153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0210 13:18:52.961523  687731 cache.go:56] Caching tarball of preloaded images
	I0210 13:18:52.961646  687731 preload.go:172] Found /home/jenkins/minikube-integration/20383-625153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0210 13:18:52.961662  687731 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0210 13:18:52.961812  687731 profile.go:143] Saving config to /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/embed-certs-396582/config.json ...
	I0210 13:18:52.962087  687731 start.go:360] acquireMachinesLock for embed-certs-396582: {Name:mk28e87da66de739a4c7c70d1fb5afc4ce31a4d0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0210 13:18:52.962157  687731 start.go:364] duration metric: took 45.165µs to acquireMachinesLock for "embed-certs-396582"
	I0210 13:18:52.962177  687731 start.go:96] Skipping create...Using existing machine configuration
	I0210 13:18:52.962184  687731 fix.go:54] fixHost starting: 
	I0210 13:18:52.962536  687731 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:18:52.962570  687731 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:18:52.984038  687731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36653
	I0210 13:18:52.984508  687731 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:18:52.985250  687731 main.go:141] libmachine: Using API Version  1
	I0210 13:18:52.985286  687731 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:18:52.985710  687731 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:18:52.985912  687731 main.go:141] libmachine: (embed-certs-396582) Calling .DriverName
	I0210 13:18:52.986062  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetState
	I0210 13:18:52.988014  687731 fix.go:112] recreateIfNeeded on embed-certs-396582: state=Stopped err=<nil>
	I0210 13:18:52.988043  687731 main.go:141] libmachine: (embed-certs-396582) Calling .DriverName
	W0210 13:18:52.988188  687731 fix.go:138] unexpected machine state, will restart: <nil>
	I0210 13:18:52.990191  687731 out.go:177] * Restarting existing kvm2 VM for "embed-certs-396582" ...
	I0210 13:18:51.300467  687246 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.264939764s)
	I0210 13:18:51.300516  687246 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0210 13:18:51.300469  687246 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.32.1: (2.264959804s)
	I0210 13:18:51.300529  687246 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20383-625153/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1 from cache
	I0210 13:18:51.300549  687246 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0210 13:18:51.300593  687246 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0210 13:18:53.365691  687246 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.065071021s)
	I0210 13:18:53.365734  687246 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20383-625153/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0210 13:18:53.365765  687246 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0210 13:18:53.365819  687246 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0210 13:18:54.007661  687246 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20383-625153/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0210 13:18:54.007726  687246 cache_images.go:123] Successfully loaded all cached images
	I0210 13:18:54.007735  687246 cache_images.go:92] duration metric: took 15.226453837s to LoadCachedImages
	I0210 13:18:54.007754  687246 kubeadm.go:934] updating node { 192.168.39.183 8443 v1.32.1 crio true true} ...
	I0210 13:18:54.007932  687246 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-112306 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.183
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:no-preload-112306 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0210 13:18:54.008038  687246 ssh_runner.go:195] Run: crio config
	I0210 13:18:54.053004  687246 cni.go:84] Creating CNI manager for ""
	I0210 13:18:54.053033  687246 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 13:18:54.053046  687246 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0210 13:18:54.053069  687246 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.183 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-112306 NodeName:no-preload-112306 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.183"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.183 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0210 13:18:54.053267  687246 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.183
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-112306"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.183"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.183"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0210 13:18:54.053347  687246 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0210 13:18:54.063435  687246 binaries.go:44] Found k8s binaries, skipping transfer
	I0210 13:18:54.063506  687246 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0210 13:18:54.072988  687246 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0210 13:18:54.090915  687246 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0210 13:18:54.108214  687246 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
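	# A minimal sketch, assuming kubeadm v1.26+ (which ships 'kubeadm config validate'):
	# now that the rendered config is on disk, it can be sanity-checked before the
	# init phases run against it.
	sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" \
	  kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new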
	I0210 13:18:54.126944  687246 ssh_runner.go:195] Run: grep 192.168.39.183	control-plane.minikube.internal$ /etc/hosts
	I0210 13:18:54.131104  687246 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.183	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 13:18:54.143429  687246 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 13:18:54.283560  687246 ssh_runner.go:195] Run: sudo systemctl start kubelet
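	# A minimal sketch of verifying the two steps above took effect: systemd should
	# show the kubelet unit together with the 10-kubeadm.conf drop-in, and the
	# service should report active after the start above.
	systemctl cat kubelet --no-pager
	systemctl is-active kubelet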
	I0210 13:18:54.302531  687246 certs.go:68] Setting up /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/no-preload-112306 for IP: 192.168.39.183
	I0210 13:18:54.302562  687246 certs.go:194] generating shared ca certs ...
	I0210 13:18:54.302594  687246 certs.go:226] acquiring lock for ca certs: {Name:mkf2f72f82a6bd7e3c16bb224cd26b80c3c89e0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:18:54.302854  687246 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20383-625153/.minikube/ca.key
	I0210 13:18:54.302926  687246 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20383-625153/.minikube/proxy-client-ca.key
	I0210 13:18:54.302946  687246 certs.go:256] generating profile certs ...
	I0210 13:18:54.303086  687246 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/no-preload-112306/client.key
	I0210 13:18:54.303196  687246 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/no-preload-112306/apiserver.key.14666fb6
	I0210 13:18:54.303267  687246 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/no-preload-112306/proxy-client.key
	I0210 13:18:54.303437  687246 certs.go:484] found cert: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/632352.pem (1338 bytes)
	W0210 13:18:54.303484  687246 certs.go:480] ignoring /home/jenkins/minikube-integration/20383-625153/.minikube/certs/632352_empty.pem, impossibly tiny 0 bytes
	I0210 13:18:54.303502  687246 certs.go:484] found cert: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca-key.pem (1675 bytes)
	I0210 13:18:54.303547  687246 certs.go:484] found cert: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca.pem (1082 bytes)
	I0210 13:18:54.303588  687246 certs.go:484] found cert: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/cert.pem (1123 bytes)
	I0210 13:18:54.303631  687246 certs.go:484] found cert: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/key.pem (1675 bytes)
	I0210 13:18:54.303703  687246 certs.go:484] found cert: /home/jenkins/minikube-integration/20383-625153/.minikube/files/etc/ssl/certs/6323522.pem (1708 bytes)
	I0210 13:18:54.304808  687246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0210 13:18:54.357011  687246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0210 13:18:54.386244  687246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0210 13:18:54.420210  687246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0210 13:18:54.461708  687246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/no-preload-112306/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0210 13:18:54.493791  687246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/no-preload-112306/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0210 13:18:54.525718  687246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/no-preload-112306/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0210 13:18:54.551372  687246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/no-preload-112306/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0210 13:18:54.576326  687246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/certs/632352.pem --> /usr/share/ca-certificates/632352.pem (1338 bytes)
	I0210 13:18:54.602359  687246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/files/etc/ssl/certs/6323522.pem --> /usr/share/ca-certificates/6323522.pem (1708 bytes)
	I0210 13:18:54.627380  687246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0210 13:18:54.653004  687246 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0210 13:18:54.673646  687246 ssh_runner.go:195] Run: openssl version
	I0210 13:18:54.682259  687246 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6323522.pem && ln -fs /usr/share/ca-certificates/6323522.pem /etc/ssl/certs/6323522.pem"
	I0210 13:18:54.697772  687246 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6323522.pem
	I0210 13:18:54.703444  687246 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 10 12:13 /usr/share/ca-certificates/6323522.pem
	I0210 13:18:54.703542  687246 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6323522.pem
	I0210 13:18:54.710081  687246 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6323522.pem /etc/ssl/certs/3ec20f2e.0"
	I0210 13:18:54.724204  687246 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0210 13:18:54.736942  687246 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0210 13:18:54.742174  687246 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 10 12:05 /usr/share/ca-certificates/minikubeCA.pem
	I0210 13:18:54.742249  687246 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0210 13:18:54.748209  687246 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0210 13:18:54.760795  687246 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/632352.pem && ln -fs /usr/share/ca-certificates/632352.pem /etc/ssl/certs/632352.pem"
	I0210 13:18:54.773161  687246 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/632352.pem
	I0210 13:18:54.778068  687246 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 10 12:13 /usr/share/ca-certificates/632352.pem
	I0210 13:18:54.778142  687246 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/632352.pem
	I0210 13:18:54.784503  687246 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/632352.pem /etc/ssl/certs/51391683.0"
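	# A minimal sketch of the pattern in the three blocks above: OpenSSL looks a CA
	# up in /etc/ssl/certs via a symlink named after the certificate's subject hash,
	# so trusting an extra CA is copy, hash, then symlink.
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"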
	I0210 13:18:54.795590  687246 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0210 13:18:54.800637  687246 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0210 13:18:54.806879  687246 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0210 13:18:54.813058  687246 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0210 13:18:54.819124  687246 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0210 13:18:54.825597  687246 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0210 13:18:54.831979  687246 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
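	# A minimal sketch of what the -checkend 86400 probes above ask: openssl exits 0
	# only if the certificate is still valid 86400 seconds (24h) from now, so a
	# non-zero exit flags a certificate that is about to expire.
	openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	  || echo "apiserver-kubelet-client.crt expires within 24h"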
	I0210 13:18:54.838323  687246 kubeadm.go:392] StartCluster: {Name:no-preload-112306 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-112306 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 13:18:54.838475  687246 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0210 13:18:54.838550  687246 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0210 13:18:54.877717  687246 cri.go:89] found id: ""
	I0210 13:18:54.877814  687246 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0210 13:18:54.887880  687246 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0210 13:18:54.887906  687246 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0210 13:18:54.887978  687246 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0210 13:18:54.897450  687246 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0210 13:18:54.898322  687246 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-112306" does not appear in /home/jenkins/minikube-integration/20383-625153/kubeconfig
	I0210 13:18:54.898737  687246 kubeconfig.go:62] /home/jenkins/minikube-integration/20383-625153/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-112306" cluster setting kubeconfig missing "no-preload-112306" context setting]
	I0210 13:18:54.899444  687246 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20383-625153/kubeconfig: {Name:mke7ef1ff4ff1259856291979fdd0337df3a08b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
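	# A minimal sketch, with names taken from the log above, of what the kubeconfig
	# repair amounts to: (re)register the missing cluster and context entries in the
	# shared kubeconfig (user credentials omitted here).
	export KUBECONFIG=/home/jenkins/minikube-integration/20383-625153/kubeconfig
	kubectl config set-cluster no-preload-112306 \
	  --server=https://192.168.39.183:8443 \
	  --certificate-authority=/home/jenkins/minikube-integration/20383-625153/.minikube/ca.crt
	kubectl config set-context no-preload-112306 --cluster=no-preload-112306 --user=no-preload-112306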
	I0210 13:18:54.901283  687246 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0210 13:18:54.911722  687246 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.183
	I0210 13:18:54.911768  687246 kubeadm.go:1160] stopping kube-system containers ...
	I0210 13:18:54.911788  687246 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0210 13:18:54.911855  687246 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0210 13:18:54.949086  687246 cri.go:89] found id: ""
	I0210 13:18:54.949220  687246 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0210 13:18:54.971368  687246 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 13:18:54.981585  687246 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 13:18:54.981606  687246 kubeadm.go:157] found existing configuration files:
	
	I0210 13:18:54.981655  687246 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0210 13:18:54.990797  687246 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 13:18:54.990857  687246 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 13:18:55.000404  687246 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0210 13:18:55.009456  687246 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 13:18:55.009533  687246 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 13:18:55.019104  687246 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0210 13:18:55.028888  687246 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 13:18:55.028976  687246 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 13:18:55.039323  687246 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0210 13:18:55.049519  687246 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 13:18:55.049602  687246 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
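	# A minimal sketch of the loop performed by the four checks above: keep a
	# kubeconfig only if it already points at control-plane.minikube.internal:8443,
	# otherwise remove it so the kubeadm phases below regenerate it.
	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f.conf \
	    || sudo rm -f /etc/kubernetes/$f.conf
	done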
	I0210 13:18:55.059115  687246 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0210 13:18:55.069167  687246 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 13:18:55.179668  687246 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 13:18:56.100252  687246 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0210 13:18:51.844149  679972 api_server.go:253] Checking apiserver healthz at https://192.168.50.25:8443/healthz ...
	I0210 13:18:51.844815  679972 api_server.go:269] stopped: https://192.168.50.25:8443/healthz: Get "https://192.168.50.25:8443/healthz": dial tcp 192.168.50.25:8443: connect: connection refused
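	# A minimal sketch of the probe above done by hand against the same endpoint;
	# anonymous access to /healthz is allowed by the default system:public-info-viewer
	# binding, so this prints "ok" on a healthy apiserver and fails with the same
	# connection-refused error while it is down.
	curl -sk https://192.168.50.25:8443/healthz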
	I0210 13:18:51.844902  679972 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:18:51.844970  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:18:51.883262  679972 cri.go:89] found id: "cc0fad0e28c192a15bcbd2ea0cbd62014f415a10df309d883d47e53249e42b99"
	I0210 13:18:51.883291  679972 cri.go:89] found id: ""
	I0210 13:18:51.883301  679972 logs.go:282] 1 containers: [cc0fad0e28c192a15bcbd2ea0cbd62014f415a10df309d883d47e53249e42b99]
	I0210 13:18:51.883370  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:18:51.888231  679972 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:18:51.888304  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:18:51.931806  679972 cri.go:89] found id: "20160adca5607e90c84c8383218ad704de5a5f989ebb40d7aa180da7c1ac9ff2"
	I0210 13:18:51.931832  679972 cri.go:89] found id: ""
	I0210 13:18:51.931842  679972 logs.go:282] 1 containers: [20160adca5607e90c84c8383218ad704de5a5f989ebb40d7aa180da7c1ac9ff2]
	I0210 13:18:51.931900  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:18:51.936823  679972 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:18:51.936905  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:18:51.971812  679972 cri.go:89] found id: "598f33d31a9580b27d51606559814f76af6d19f0ce8c85565694b0042307e34d"
	I0210 13:18:51.971834  679972 cri.go:89] found id: "de4935b1d2c7413b50a3a6bd9cb6b5243f8918b6f3662d005b53ee2d6b5d398f"
	I0210 13:18:51.971838  679972 cri.go:89] found id: ""
	I0210 13:18:51.971845  679972 logs.go:282] 2 containers: [598f33d31a9580b27d51606559814f76af6d19f0ce8c85565694b0042307e34d de4935b1d2c7413b50a3a6bd9cb6b5243f8918b6f3662d005b53ee2d6b5d398f]
	I0210 13:18:51.971901  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:18:51.975886  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:18:51.979680  679972 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:18:51.979744  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:18:52.014888  679972 cri.go:89] found id: "4ead52805781732f3cfbd1fa7f211cc02747bbd972dc9ecea60035b9c2f809ae"
	I0210 13:18:52.014913  679972 cri.go:89] found id: ""
	I0210 13:18:52.014924  679972 logs.go:282] 1 containers: [4ead52805781732f3cfbd1fa7f211cc02747bbd972dc9ecea60035b9c2f809ae]
	I0210 13:18:52.014983  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:18:52.019740  679972 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:18:52.019819  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:18:52.061852  679972 cri.go:89] found id: "489730fca41c45aafdf96e8fad2af2dd4df11a2818e7d0cf4b02956a5c613aa0"
	I0210 13:18:52.061882  679972 cri.go:89] found id: ""
	I0210 13:18:52.061893  679972 logs.go:282] 1 containers: [489730fca41c45aafdf96e8fad2af2dd4df11a2818e7d0cf4b02956a5c613aa0]
	I0210 13:18:52.061957  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:18:52.066225  679972 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:18:52.066291  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:18:52.107812  679972 cri.go:89] found id: "c8aa1561a3ec9eeae9ad70bcf145afbf075ed6989d1e52c4b19164eecdc28e79"
	I0210 13:18:52.107841  679972 cri.go:89] found id: ""
	I0210 13:18:52.107853  679972 logs.go:282] 1 containers: [c8aa1561a3ec9eeae9ad70bcf145afbf075ed6989d1e52c4b19164eecdc28e79]
	I0210 13:18:52.107911  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:18:52.112310  679972 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:18:52.112366  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:18:52.153227  679972 cri.go:89] found id: ""
	I0210 13:18:52.153262  679972 logs.go:282] 0 containers: []
	W0210 13:18:52.153273  679972 logs.go:284] No container was found matching "kindnet"
	I0210 13:18:52.153282  679972 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0210 13:18:52.153336  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0210 13:18:52.188227  679972 cri.go:89] found id: "27fbbf4567ed637ba5fcc220e7a30e6ae4112b3c8576695f7f9d288b3c461e2b"
	I0210 13:18:52.188259  679972 cri.go:89] found id: ""
	I0210 13:18:52.188271  679972 logs.go:282] 1 containers: [27fbbf4567ed637ba5fcc220e7a30e6ae4112b3c8576695f7f9d288b3c461e2b]
	I0210 13:18:52.188333  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:18:52.192146  679972 logs.go:123] Gathering logs for kubelet ...
	I0210 13:18:52.192173  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:18:52.308889  679972 logs.go:123] Gathering logs for etcd [20160adca5607e90c84c8383218ad704de5a5f989ebb40d7aa180da7c1ac9ff2] ...
	I0210 13:18:52.308937  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20160adca5607e90c84c8383218ad704de5a5f989ebb40d7aa180da7c1ac9ff2"
	I0210 13:18:52.360722  679972 logs.go:123] Gathering logs for storage-provisioner [27fbbf4567ed637ba5fcc220e7a30e6ae4112b3c8576695f7f9d288b3c461e2b] ...
	I0210 13:18:52.360761  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27fbbf4567ed637ba5fcc220e7a30e6ae4112b3c8576695f7f9d288b3c461e2b"
	I0210 13:18:52.396355  679972 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:18:52.396392  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:18:52.730840  679972 logs.go:123] Gathering logs for container status ...
	I0210 13:18:52.730874  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:18:52.782279  679972 logs.go:123] Gathering logs for dmesg ...
	I0210 13:18:52.782315  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:18:52.799224  679972 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:18:52.799263  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:18:52.876774  679972 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:18:52.876802  679972 logs.go:123] Gathering logs for kube-apiserver [cc0fad0e28c192a15bcbd2ea0cbd62014f415a10df309d883d47e53249e42b99] ...
	I0210 13:18:52.876816  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc0fad0e28c192a15bcbd2ea0cbd62014f415a10df309d883d47e53249e42b99"
	I0210 13:18:52.933864  679972 logs.go:123] Gathering logs for coredns [598f33d31a9580b27d51606559814f76af6d19f0ce8c85565694b0042307e34d] ...
	I0210 13:18:52.933899  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 598f33d31a9580b27d51606559814f76af6d19f0ce8c85565694b0042307e34d"
	I0210 13:18:52.980721  679972 logs.go:123] Gathering logs for coredns [de4935b1d2c7413b50a3a6bd9cb6b5243f8918b6f3662d005b53ee2d6b5d398f] ...
	I0210 13:18:52.980761  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de4935b1d2c7413b50a3a6bd9cb6b5243f8918b6f3662d005b53ee2d6b5d398f"
	I0210 13:18:53.023950  679972 logs.go:123] Gathering logs for kube-scheduler [4ead52805781732f3cfbd1fa7f211cc02747bbd972dc9ecea60035b9c2f809ae] ...
	I0210 13:18:53.024008  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ead52805781732f3cfbd1fa7f211cc02747bbd972dc9ecea60035b9c2f809ae"
	I0210 13:18:53.069669  679972 logs.go:123] Gathering logs for kube-proxy [489730fca41c45aafdf96e8fad2af2dd4df11a2818e7d0cf4b02956a5c613aa0] ...
	I0210 13:18:53.069704  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 489730fca41c45aafdf96e8fad2af2dd4df11a2818e7d0cf4b02956a5c613aa0"
	I0210 13:18:53.104102  679972 logs.go:123] Gathering logs for kube-controller-manager [c8aa1561a3ec9eeae9ad70bcf145afbf075ed6989d1e52c4b19164eecdc28e79] ...
	I0210 13:18:53.104137  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8aa1561a3ec9eeae9ad70bcf145afbf075ed6989d1e52c4b19164eecdc28e79"
	I0210 13:18:55.636967  679972 api_server.go:253] Checking apiserver healthz at https://192.168.50.25:8443/healthz ...
	I0210 13:18:55.637722  679972 api_server.go:269] stopped: https://192.168.50.25:8443/healthz: Get "https://192.168.50.25:8443/healthz": dial tcp 192.168.50.25:8443: connect: connection refused
	I0210 13:18:55.637780  679972 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:18:55.637829  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:18:55.677517  679972 cri.go:89] found id: "cc0fad0e28c192a15bcbd2ea0cbd62014f415a10df309d883d47e53249e42b99"
	I0210 13:18:55.677550  679972 cri.go:89] found id: ""
	I0210 13:18:55.677558  679972 logs.go:282] 1 containers: [cc0fad0e28c192a15bcbd2ea0cbd62014f415a10df309d883d47e53249e42b99]
	I0210 13:18:55.677620  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:18:55.681696  679972 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:18:55.681761  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:18:55.720425  679972 cri.go:89] found id: "20160adca5607e90c84c8383218ad704de5a5f989ebb40d7aa180da7c1ac9ff2"
	I0210 13:18:55.720455  679972 cri.go:89] found id: ""
	I0210 13:18:55.720466  679972 logs.go:282] 1 containers: [20160adca5607e90c84c8383218ad704de5a5f989ebb40d7aa180da7c1ac9ff2]
	I0210 13:18:55.720540  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:18:55.725082  679972 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:18:55.725172  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:18:55.763477  679972 cri.go:89] found id: "598f33d31a9580b27d51606559814f76af6d19f0ce8c85565694b0042307e34d"
	I0210 13:18:55.763503  679972 cri.go:89] found id: "de4935b1d2c7413b50a3a6bd9cb6b5243f8918b6f3662d005b53ee2d6b5d398f"
	I0210 13:18:55.763508  679972 cri.go:89] found id: ""
	I0210 13:18:55.763517  679972 logs.go:282] 2 containers: [598f33d31a9580b27d51606559814f76af6d19f0ce8c85565694b0042307e34d de4935b1d2c7413b50a3a6bd9cb6b5243f8918b6f3662d005b53ee2d6b5d398f]
	I0210 13:18:55.763583  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:18:55.767519  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:18:55.771881  679972 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:18:55.771960  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:18:55.813837  679972 cri.go:89] found id: "4ead52805781732f3cfbd1fa7f211cc02747bbd972dc9ecea60035b9c2f809ae"
	I0210 13:18:55.813864  679972 cri.go:89] found id: ""
	I0210 13:18:55.813874  679972 logs.go:282] 1 containers: [4ead52805781732f3cfbd1fa7f211cc02747bbd972dc9ecea60035b9c2f809ae]
	I0210 13:18:55.813935  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:18:55.817952  679972 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:18:55.818024  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:18:55.857759  679972 cri.go:89] found id: "489730fca41c45aafdf96e8fad2af2dd4df11a2818e7d0cf4b02956a5c613aa0"
	I0210 13:18:55.857784  679972 cri.go:89] found id: ""
	I0210 13:18:55.857794  679972 logs.go:282] 1 containers: [489730fca41c45aafdf96e8fad2af2dd4df11a2818e7d0cf4b02956a5c613aa0]
	I0210 13:18:55.857858  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:18:55.862216  679972 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:18:55.862288  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:18:55.898905  679972 cri.go:89] found id: "c8aa1561a3ec9eeae9ad70bcf145afbf075ed6989d1e52c4b19164eecdc28e79"
	I0210 13:18:55.898931  679972 cri.go:89] found id: ""
	I0210 13:18:55.898941  679972 logs.go:282] 1 containers: [c8aa1561a3ec9eeae9ad70bcf145afbf075ed6989d1e52c4b19164eecdc28e79]
	I0210 13:18:55.899003  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:18:55.902926  679972 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:18:55.902993  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:18:55.936202  679972 cri.go:89] found id: ""
	I0210 13:18:55.936232  679972 logs.go:282] 0 containers: []
	W0210 13:18:55.936245  679972 logs.go:284] No container was found matching "kindnet"
	I0210 13:18:55.936252  679972 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0210 13:18:55.936327  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0210 13:18:55.974278  679972 cri.go:89] found id: "27fbbf4567ed637ba5fcc220e7a30e6ae4112b3c8576695f7f9d288b3c461e2b"
	I0210 13:18:55.974299  679972 cri.go:89] found id: ""
	I0210 13:18:55.974308  679972 logs.go:282] 1 containers: [27fbbf4567ed637ba5fcc220e7a30e6ae4112b3c8576695f7f9d288b3c461e2b]
	I0210 13:18:55.974378  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:18:55.978611  679972 logs.go:123] Gathering logs for storage-provisioner [27fbbf4567ed637ba5fcc220e7a30e6ae4112b3c8576695f7f9d288b3c461e2b] ...
	I0210 13:18:55.978637  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27fbbf4567ed637ba5fcc220e7a30e6ae4112b3c8576695f7f9d288b3c461e2b"
	I0210 13:18:56.013260  679972 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:18:56.013299  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:18:56.321642  679972 logs.go:123] Gathering logs for kubelet ...
	I0210 13:18:56.321682  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:18:56.431986  679972 logs.go:123] Gathering logs for dmesg ...
	I0210 13:18:56.432029  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:18:56.448350  679972 logs.go:123] Gathering logs for etcd [20160adca5607e90c84c8383218ad704de5a5f989ebb40d7aa180da7c1ac9ff2] ...
	I0210 13:18:56.448385  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20160adca5607e90c84c8383218ad704de5a5f989ebb40d7aa180da7c1ac9ff2"
	I0210 13:18:56.487646  679972 logs.go:123] Gathering logs for coredns [de4935b1d2c7413b50a3a6bd9cb6b5243f8918b6f3662d005b53ee2d6b5d398f] ...
	I0210 13:18:56.487687  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de4935b1d2c7413b50a3a6bd9cb6b5243f8918b6f3662d005b53ee2d6b5d398f"
	I0210 13:18:56.521408  679972 logs.go:123] Gathering logs for kube-scheduler [4ead52805781732f3cfbd1fa7f211cc02747bbd972dc9ecea60035b9c2f809ae] ...
	I0210 13:18:56.521444  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ead52805781732f3cfbd1fa7f211cc02747bbd972dc9ecea60035b9c2f809ae"
	I0210 13:18:56.577640  679972 logs.go:123] Gathering logs for container status ...
	I0210 13:18:56.577692  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:18:56.625974  679972 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:18:56.626012  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
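The log-gathering loop above pairs two crictl invocations per component: `crictl ps -a --quiet --name=<component>` to find container IDs, then `crictl logs --tail 400 <id>` to pull recent output. A small, self-contained Go sketch of that pairing; containerIDs and tailLogs are hypothetical helpers for illustration, not minikube functions:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists all containers (running or exited) whose name matches,
    // exactly as the "crictl ps -a --quiet --name=..." calls above do.
    func containerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    // tailLogs prints the last 400 lines of one container's logs.
    func tailLogs(id string) error {
        out, err := exec.Command("sudo", "/usr/bin/crictl", "logs", "--tail", "400", id).CombinedOutput()
        fmt.Print(string(out))
        return err
    }

    func main() {
        ids, err := containerIDs("kube-apiserver")
        if err != nil {
            panic(err)
        }
        for _, id := range ids {
            _ = tailLogs(id)
        }
    }
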
	I0210 13:18:52.991500  687731 main.go:141] libmachine: (embed-certs-396582) Calling .Start
	I0210 13:18:52.991806  687731 main.go:141] libmachine: (embed-certs-396582) starting domain...
	I0210 13:18:52.991842  687731 main.go:141] libmachine: (embed-certs-396582) ensuring networks are active...
	I0210 13:18:52.993433  687731 main.go:141] libmachine: (embed-certs-396582) Ensuring network default is active
	I0210 13:18:52.993816  687731 main.go:141] libmachine: (embed-certs-396582) Ensuring network mk-embed-certs-396582 is active
	I0210 13:18:52.994236  687731 main.go:141] libmachine: (embed-certs-396582) getting domain XML...
	I0210 13:18:52.995070  687731 main.go:141] libmachine: (embed-certs-396582) creating domain...
	I0210 13:18:54.372640  687731 main.go:141] libmachine: (embed-certs-396582) waiting for IP...
	I0210 13:18:54.373959  687731 main.go:141] libmachine: (embed-certs-396582) DBG | domain embed-certs-396582 has defined MAC address 52:54:00:df:11:43 in network mk-embed-certs-396582
	I0210 13:18:54.374514  687731 main.go:141] libmachine: (embed-certs-396582) DBG | unable to find current IP address of domain embed-certs-396582 in network mk-embed-certs-396582
	I0210 13:18:54.374604  687731 main.go:141] libmachine: (embed-certs-396582) DBG | I0210 13:18:54.374496  687765 retry.go:31] will retry after 222.800359ms: waiting for domain to come up
	I0210 13:18:54.599342  687731 main.go:141] libmachine: (embed-certs-396582) DBG | domain embed-certs-396582 has defined MAC address 52:54:00:df:11:43 in network mk-embed-certs-396582
	I0210 13:18:54.599969  687731 main.go:141] libmachine: (embed-certs-396582) DBG | unable to find current IP address of domain embed-certs-396582 in network mk-embed-certs-396582
	I0210 13:18:54.599996  687731 main.go:141] libmachine: (embed-certs-396582) DBG | I0210 13:18:54.599932  687765 retry.go:31] will retry after 238.837122ms: waiting for domain to come up
	I0210 13:18:54.840682  687731 main.go:141] libmachine: (embed-certs-396582) DBG | domain embed-certs-396582 has defined MAC address 52:54:00:df:11:43 in network mk-embed-certs-396582
	I0210 13:18:54.841433  687731 main.go:141] libmachine: (embed-certs-396582) DBG | unable to find current IP address of domain embed-certs-396582 in network mk-embed-certs-396582
	I0210 13:18:54.841477  687731 main.go:141] libmachine: (embed-certs-396582) DBG | I0210 13:18:54.841391  687765 retry.go:31] will retry after 403.481576ms: waiting for domain to come up
	I0210 13:18:55.246843  687731 main.go:141] libmachine: (embed-certs-396582) DBG | domain embed-certs-396582 has defined MAC address 52:54:00:df:11:43 in network mk-embed-certs-396582
	I0210 13:18:55.247515  687731 main.go:141] libmachine: (embed-certs-396582) DBG | unable to find current IP address of domain embed-certs-396582 in network mk-embed-certs-396582
	I0210 13:18:55.247552  687731 main.go:141] libmachine: (embed-certs-396582) DBG | I0210 13:18:55.247462  687765 retry.go:31] will retry after 394.655356ms: waiting for domain to come up
	I0210 13:18:55.644257  687731 main.go:141] libmachine: (embed-certs-396582) DBG | domain embed-certs-396582 has defined MAC address 52:54:00:df:11:43 in network mk-embed-certs-396582
	I0210 13:18:55.644912  687731 main.go:141] libmachine: (embed-certs-396582) DBG | unable to find current IP address of domain embed-certs-396582 in network mk-embed-certs-396582
	I0210 13:18:55.644938  687731 main.go:141] libmachine: (embed-certs-396582) DBG | I0210 13:18:55.644884  687765 retry.go:31] will retry after 638.529913ms: waiting for domain to come up
	I0210 13:18:56.284732  687731 main.go:141] libmachine: (embed-certs-396582) DBG | domain embed-certs-396582 has defined MAC address 52:54:00:df:11:43 in network mk-embed-certs-396582
	I0210 13:18:56.285355  687731 main.go:141] libmachine: (embed-certs-396582) DBG | unable to find current IP address of domain embed-certs-396582 in network mk-embed-certs-396582
	I0210 13:18:56.285387  687731 main.go:141] libmachine: (embed-certs-396582) DBG | I0210 13:18:56.285330  687765 retry.go:31] will retry after 615.768501ms: waiting for domain to come up
	I0210 13:18:56.903145  687731 main.go:141] libmachine: (embed-certs-396582) DBG | domain embed-certs-396582 has defined MAC address 52:54:00:df:11:43 in network mk-embed-certs-396582
	I0210 13:18:56.903619  687731 main.go:141] libmachine: (embed-certs-396582) DBG | unable to find current IP address of domain embed-certs-396582 in network mk-embed-certs-396582
	I0210 13:18:56.903652  687731 main.go:141] libmachine: (embed-certs-396582) DBG | I0210 13:18:56.903569  687765 retry.go:31] will retry after 835.415401ms: waiting for domain to come up
	I0210 13:18:57.740684  687731 main.go:141] libmachine: (embed-certs-396582) DBG | domain embed-certs-396582 has defined MAC address 52:54:00:df:11:43 in network mk-embed-certs-396582
	I0210 13:18:57.741181  687731 main.go:141] libmachine: (embed-certs-396582) DBG | unable to find current IP address of domain embed-certs-396582 in network mk-embed-certs-396582
	I0210 13:18:57.741274  687731 main.go:141] libmachine: (embed-certs-396582) DBG | I0210 13:18:57.741181  687765 retry.go:31] will retry after 1.233042735s: waiting for domain to come up
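The libmachine driver above polls for the domain's IP address and retries with a growing delay until the domain comes up. A minimal sketch of that retry pattern, with a placeholder lookupIP standing in for the real libvirt DHCP-lease query; the growth factor is an approximation of the delays visible in the log, not the driver's exact schedule:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // lookupIP is a placeholder for querying libvirt for the domain's lease.
    func lookupIP() (string, error) { return "", errors.New("no lease yet") }

    func waitForIP(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 200 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(); err == nil {
                return ip, nil
            }
            fmt.Printf("will retry after %v: waiting for domain to come up\n", delay)
            time.Sleep(delay)
            delay = delay * 3 / 2 // roughly the growth seen in the retries above
        }
        return "", errors.New("timed out waiting for domain IP")
    }

    func main() {
        if _, err := waitForIP(2 * time.Minute); err != nil {
            fmt.Println(err)
        }
    }
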
	I0210 13:18:56.324844  687246 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 13:18:56.421268  687246 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0210 13:18:56.583327  687246 api_server.go:52] waiting for apiserver process to appear ...
	I0210 13:18:56.583481  687246 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:18:57.083621  687246 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:18:57.583598  687246 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:18:57.597782  687246 api_server.go:72] duration metric: took 1.014452129s to wait for apiserver process to appear ...
	I0210 13:18:57.597816  687246 api_server.go:88] waiting for apiserver healthz status ...
	I0210 13:18:57.597842  687246 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I0210 13:19:00.044957  687246 api_server.go:279] https://192.168.39.183:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0210 13:19:00.044994  687246 api_server.go:103] status: https://192.168.39.183:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0210 13:19:00.045014  687246 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I0210 13:19:00.108448  687246 api_server.go:279] https://192.168.39.183:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0210 13:19:00.108488  687246 api_server.go:103] status: https://192.168.39.183:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0210 13:19:00.108513  687246 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I0210 13:19:00.116786  687246 api_server.go:279] https://192.168.39.183:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0210 13:19:00.116820  687246 api_server.go:103] status: https://192.168.39.183:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0210 13:19:00.598287  687246 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I0210 13:19:00.607019  687246 api_server.go:279] https://192.168.39.183:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0210 13:19:00.607123  687246 api_server.go:103] status: https://192.168.39.183:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0210 13:19:01.098856  687246 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I0210 13:19:01.106400  687246 api_server.go:279] https://192.168.39.183:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0210 13:19:01.106442  687246 api_server.go:103] status: https://192.168.39.183:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0210 13:19:01.598741  687246 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I0210 13:19:01.603225  687246 api_server.go:279] https://192.168.39.183:8443/healthz returned 200:
	ok
	I0210 13:19:01.609991  687246 api_server.go:141] control plane version: v1.32.1
	I0210 13:19:01.610041  687246 api_server.go:131] duration metric: took 4.012215157s to wait for apiserver health ...
	I0210 13:19:01.610060  687246 cni.go:84] Creating CNI manager for ""
	I0210 13:19:01.610072  687246 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 13:19:01.611727  687246 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
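The healthz sequence above is a plain polling loop: GET /healthz on the apiserver, treat 403 (anonymous access rejected) and 500 (post-start hooks still failing) as "not ready yet", and stop at the first 200. A hedged sketch of that loop; the endpoint, the ~500ms poll interval, and the insecure TLS transport are assumptions for illustration only:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            // Skipping verification here only because this sketch has no cluster CA.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // "healthz returned 200: ok"
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
        fmt.Println(waitForHealthz("https://192.168.39.183:8443/healthz", 4*time.Minute))
    }
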
	W0210 13:18:56.698703  679972 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:18:56.698736  679972 logs.go:123] Gathering logs for kube-apiserver [cc0fad0e28c192a15bcbd2ea0cbd62014f415a10df309d883d47e53249e42b99] ...
	I0210 13:18:56.698754  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc0fad0e28c192a15bcbd2ea0cbd62014f415a10df309d883d47e53249e42b99"
	I0210 13:18:56.742386  679972 logs.go:123] Gathering logs for coredns [598f33d31a9580b27d51606559814f76af6d19f0ce8c85565694b0042307e34d] ...
	I0210 13:18:56.742419  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 598f33d31a9580b27d51606559814f76af6d19f0ce8c85565694b0042307e34d"
	I0210 13:18:56.782697  679972 logs.go:123] Gathering logs for kube-proxy [489730fca41c45aafdf96e8fad2af2dd4df11a2818e7d0cf4b02956a5c613aa0] ...
	I0210 13:18:56.782734  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 489730fca41c45aafdf96e8fad2af2dd4df11a2818e7d0cf4b02956a5c613aa0"
	I0210 13:18:56.820610  679972 logs.go:123] Gathering logs for kube-controller-manager [c8aa1561a3ec9eeae9ad70bcf145afbf075ed6989d1e52c4b19164eecdc28e79] ...
	I0210 13:18:56.820645  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8aa1561a3ec9eeae9ad70bcf145afbf075ed6989d1e52c4b19164eecdc28e79"
	I0210 13:18:59.356481  679972 api_server.go:253] Checking apiserver healthz at https://192.168.50.25:8443/healthz ...
	I0210 13:18:59.357219  679972 api_server.go:269] stopped: https://192.168.50.25:8443/healthz: Get "https://192.168.50.25:8443/healthz": dial tcp 192.168.50.25:8443: connect: connection refused
	I0210 13:18:59.357292  679972 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:18:59.357370  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:18:59.392683  679972 cri.go:89] found id: "cc0fad0e28c192a15bcbd2ea0cbd62014f415a10df309d883d47e53249e42b99"
	I0210 13:18:59.392715  679972 cri.go:89] found id: ""
	I0210 13:18:59.392727  679972 logs.go:282] 1 containers: [cc0fad0e28c192a15bcbd2ea0cbd62014f415a10df309d883d47e53249e42b99]
	I0210 13:18:59.392798  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:18:59.398361  679972 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:18:59.398457  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:18:59.436416  679972 cri.go:89] found id: "20160adca5607e90c84c8383218ad704de5a5f989ebb40d7aa180da7c1ac9ff2"
	I0210 13:18:59.436447  679972 cri.go:89] found id: ""
	I0210 13:18:59.436458  679972 logs.go:282] 1 containers: [20160adca5607e90c84c8383218ad704de5a5f989ebb40d7aa180da7c1ac9ff2]
	I0210 13:18:59.436528  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:18:59.440803  679972 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:18:59.440889  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:18:59.484930  679972 cri.go:89] found id: "598f33d31a9580b27d51606559814f76af6d19f0ce8c85565694b0042307e34d"
	I0210 13:18:59.484961  679972 cri.go:89] found id: "de4935b1d2c7413b50a3a6bd9cb6b5243f8918b6f3662d005b53ee2d6b5d398f"
	I0210 13:18:59.484967  679972 cri.go:89] found id: ""
	I0210 13:18:59.484977  679972 logs.go:282] 2 containers: [598f33d31a9580b27d51606559814f76af6d19f0ce8c85565694b0042307e34d de4935b1d2c7413b50a3a6bd9cb6b5243f8918b6f3662d005b53ee2d6b5d398f]
	I0210 13:18:59.485051  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:18:59.489705  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:18:59.493632  679972 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:18:59.493701  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:18:59.532523  679972 cri.go:89] found id: "4ead52805781732f3cfbd1fa7f211cc02747bbd972dc9ecea60035b9c2f809ae"
	I0210 13:18:59.532555  679972 cri.go:89] found id: ""
	I0210 13:18:59.532566  679972 logs.go:282] 1 containers: [4ead52805781732f3cfbd1fa7f211cc02747bbd972dc9ecea60035b9c2f809ae]
	I0210 13:18:59.532636  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:18:59.536525  679972 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:18:59.536600  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:18:59.574765  679972 cri.go:89] found id: "489730fca41c45aafdf96e8fad2af2dd4df11a2818e7d0cf4b02956a5c613aa0"
	I0210 13:18:59.574798  679972 cri.go:89] found id: ""
	I0210 13:18:59.574809  679972 logs.go:282] 1 containers: [489730fca41c45aafdf96e8fad2af2dd4df11a2818e7d0cf4b02956a5c613aa0]
	I0210 13:18:59.574876  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:18:59.580131  679972 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:18:59.580236  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:18:59.614542  679972 cri.go:89] found id: "c8aa1561a3ec9eeae9ad70bcf145afbf075ed6989d1e52c4b19164eecdc28e79"
	I0210 13:18:59.614571  679972 cri.go:89] found id: ""
	I0210 13:18:59.614582  679972 logs.go:282] 1 containers: [c8aa1561a3ec9eeae9ad70bcf145afbf075ed6989d1e52c4b19164eecdc28e79]
	I0210 13:18:59.614653  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:18:59.619524  679972 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:18:59.619612  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:18:59.664492  679972 cri.go:89] found id: ""
	I0210 13:18:59.664523  679972 logs.go:282] 0 containers: []
	W0210 13:18:59.664536  679972 logs.go:284] No container was found matching "kindnet"
	I0210 13:18:59.664544  679972 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0210 13:18:59.664618  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0210 13:18:59.711702  679972 cri.go:89] found id: "27fbbf4567ed637ba5fcc220e7a30e6ae4112b3c8576695f7f9d288b3c461e2b"
	I0210 13:18:59.711734  679972 cri.go:89] found id: ""
	I0210 13:18:59.711747  679972 logs.go:282] 1 containers: [27fbbf4567ed637ba5fcc220e7a30e6ae4112b3c8576695f7f9d288b3c461e2b]
	I0210 13:18:59.711820  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:18:59.716211  679972 logs.go:123] Gathering logs for kubelet ...
	I0210 13:18:59.716241  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:18:59.822013  679972 logs.go:123] Gathering logs for kube-scheduler [4ead52805781732f3cfbd1fa7f211cc02747bbd972dc9ecea60035b9c2f809ae] ...
	I0210 13:18:59.822052  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ead52805781732f3cfbd1fa7f211cc02747bbd972dc9ecea60035b9c2f809ae"
	I0210 13:18:59.882962  679972 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:18:59.883006  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:19:00.203418  679972 logs.go:123] Gathering logs for container status ...
	I0210 13:19:00.203461  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:19:00.259239  679972 logs.go:123] Gathering logs for coredns [598f33d31a9580b27d51606559814f76af6d19f0ce8c85565694b0042307e34d] ...
	I0210 13:19:00.259283  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 598f33d31a9580b27d51606559814f76af6d19f0ce8c85565694b0042307e34d"
	I0210 13:19:00.293663  679972 logs.go:123] Gathering logs for coredns [de4935b1d2c7413b50a3a6bd9cb6b5243f8918b6f3662d005b53ee2d6b5d398f] ...
	I0210 13:19:00.293701  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de4935b1d2c7413b50a3a6bd9cb6b5243f8918b6f3662d005b53ee2d6b5d398f"
	I0210 13:19:00.328517  679972 logs.go:123] Gathering logs for kube-proxy [489730fca41c45aafdf96e8fad2af2dd4df11a2818e7d0cf4b02956a5c613aa0] ...
	I0210 13:19:00.328559  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 489730fca41c45aafdf96e8fad2af2dd4df11a2818e7d0cf4b02956a5c613aa0"
	I0210 13:19:00.362162  679972 logs.go:123] Gathering logs for kube-controller-manager [c8aa1561a3ec9eeae9ad70bcf145afbf075ed6989d1e52c4b19164eecdc28e79] ...
	I0210 13:19:00.362197  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8aa1561a3ec9eeae9ad70bcf145afbf075ed6989d1e52c4b19164eecdc28e79"
	I0210 13:19:00.400312  679972 logs.go:123] Gathering logs for dmesg ...
	I0210 13:19:00.400366  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:19:00.413162  679972 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:19:00.413191  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:19:00.479164  679972 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:19:00.479200  679972 logs.go:123] Gathering logs for kube-apiserver [cc0fad0e28c192a15bcbd2ea0cbd62014f415a10df309d883d47e53249e42b99] ...
	I0210 13:19:00.479257  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc0fad0e28c192a15bcbd2ea0cbd62014f415a10df309d883d47e53249e42b99"
	I0210 13:19:00.521252  679972 logs.go:123] Gathering logs for etcd [20160adca5607e90c84c8383218ad704de5a5f989ebb40d7aa180da7c1ac9ff2] ...
	I0210 13:19:00.521304  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20160adca5607e90c84c8383218ad704de5a5f989ebb40d7aa180da7c1ac9ff2"
	I0210 13:19:00.569235  679972 logs.go:123] Gathering logs for storage-provisioner [27fbbf4567ed637ba5fcc220e7a30e6ae4112b3c8576695f7f9d288b3c461e2b] ...
	I0210 13:19:00.569276  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27fbbf4567ed637ba5fcc220e7a30e6ae4112b3c8576695f7f9d288b3c461e2b"
	I0210 13:18:58.975368  687731 main.go:141] libmachine: (embed-certs-396582) DBG | domain embed-certs-396582 has defined MAC address 52:54:00:df:11:43 in network mk-embed-certs-396582
	I0210 13:18:58.975900  687731 main.go:141] libmachine: (embed-certs-396582) DBG | unable to find current IP address of domain embed-certs-396582 in network mk-embed-certs-396582
	I0210 13:18:58.975930  687731 main.go:141] libmachine: (embed-certs-396582) DBG | I0210 13:18:58.975867  687765 retry.go:31] will retry after 1.604058774s: waiting for domain to come up
	I0210 13:19:00.582285  687731 main.go:141] libmachine: (embed-certs-396582) DBG | domain embed-certs-396582 has defined MAC address 52:54:00:df:11:43 in network mk-embed-certs-396582
	I0210 13:19:00.582866  687731 main.go:141] libmachine: (embed-certs-396582) DBG | unable to find current IP address of domain embed-certs-396582 in network mk-embed-certs-396582
	I0210 13:19:00.582898  687731 main.go:141] libmachine: (embed-certs-396582) DBG | I0210 13:19:00.582824  687765 retry.go:31] will retry after 1.510910649s: waiting for domain to come up
	I0210 13:19:02.095526  687731 main.go:141] libmachine: (embed-certs-396582) DBG | domain embed-certs-396582 has defined MAC address 52:54:00:df:11:43 in network mk-embed-certs-396582
	I0210 13:19:02.096172  687731 main.go:141] libmachine: (embed-certs-396582) DBG | unable to find current IP address of domain embed-certs-396582 in network mk-embed-certs-396582
	I0210 13:19:02.096207  687731 main.go:141] libmachine: (embed-certs-396582) DBG | I0210 13:19:02.096121  687765 retry.go:31] will retry after 2.266179175s: waiting for domain to come up
	I0210 13:19:01.612926  687246 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0210 13:19:01.623781  687246 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0210 13:19:01.642821  687246 system_pods.go:43] waiting for kube-system pods to appear ...
	I0210 13:19:01.646354  687246 system_pods.go:59] 8 kube-system pods found
	I0210 13:19:01.646402  687246 system_pods.go:61] "coredns-668d6bf9bc-4h69k" [d57a34d8-4565-4547-bc92-a11b31950700] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0210 13:19:01.646410  687246 system_pods.go:61] "etcd-no-preload-112306" [fd556014-615e-47b2-99e6-e5a1d2a65276] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0210 13:19:01.646421  687246 system_pods.go:61] "kube-apiserver-no-preload-112306" [f45aec30-29ad-44d7-bd6c-66610b2d8992] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0210 13:19:01.646429  687246 system_pods.go:61] "kube-controller-manager-no-preload-112306" [03f210b0-ce5c-4a48-bef5-9bcee3a3a42f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0210 13:19:01.646435  687246 system_pods.go:61] "kube-proxy-l2wxd" [f1e4a707-3cbb-4f21-a84d-eefd09a40a63] Running
	I0210 13:19:01.646442  687246 system_pods.go:61] "kube-scheduler-no-preload-112306" [8e387867-9516-43ad-a1e0-e40fa781c8f2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0210 13:19:01.646453  687246 system_pods.go:61] "metrics-server-f79f97bbb-r9f86" [db994880-48c8-4f86-9fe8-f6c59ab50022] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0210 13:19:01.646458  687246 system_pods.go:61] "storage-provisioner" [a8caeeb3-3041-4986-b66e-eda54045b416] Running
	I0210 13:19:01.646477  687246 system_pods.go:74] duration metric: took 3.619545ms to wait for pod list to return data ...
	I0210 13:19:01.646487  687246 node_conditions.go:102] verifying NodePressure condition ...
	I0210 13:19:01.651683  687246 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 13:19:01.651724  687246 node_conditions.go:123] node cpu capacity is 2
	I0210 13:19:01.651744  687246 node_conditions.go:105] duration metric: took 5.250262ms to run NodePressure ...
	I0210 13:19:01.651774  687246 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 13:19:01.920312  687246 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0210 13:19:01.924159  687246 kubeadm.go:739] kubelet initialised
	I0210 13:19:01.924177  687246 kubeadm.go:740] duration metric: took 3.841312ms waiting for restarted kubelet to initialise ...
	I0210 13:19:01.924186  687246 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0210 13:19:01.927345  687246 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-4h69k" in "kube-system" namespace to be "Ready" ...
	I0210 13:19:01.932274  687246 pod_ready.go:98] node "no-preload-112306" hosting pod "coredns-668d6bf9bc-4h69k" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-112306" has status "Ready":"False"
	I0210 13:19:01.932298  687246 pod_ready.go:82] duration metric: took 4.925203ms for pod "coredns-668d6bf9bc-4h69k" in "kube-system" namespace to be "Ready" ...
	E0210 13:19:01.932309  687246 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-112306" hosting pod "coredns-668d6bf9bc-4h69k" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-112306" has status "Ready":"False"
	I0210 13:19:01.932318  687246 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-112306" in "kube-system" namespace to be "Ready" ...
	I0210 13:19:01.936284  687246 pod_ready.go:98] node "no-preload-112306" hosting pod "etcd-no-preload-112306" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-112306" has status "Ready":"False"
	I0210 13:19:01.936301  687246 pod_ready.go:82] duration metric: took 3.97432ms for pod "etcd-no-preload-112306" in "kube-system" namespace to be "Ready" ...
	E0210 13:19:01.936309  687246 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-112306" hosting pod "etcd-no-preload-112306" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-112306" has status "Ready":"False"
	I0210 13:19:01.936314  687246 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-112306" in "kube-system" namespace to be "Ready" ...
	I0210 13:19:01.940978  687246 pod_ready.go:98] node "no-preload-112306" hosting pod "kube-apiserver-no-preload-112306" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-112306" has status "Ready":"False"
	I0210 13:19:01.940992  687246 pod_ready.go:82] duration metric: took 4.672611ms for pod "kube-apiserver-no-preload-112306" in "kube-system" namespace to be "Ready" ...
	E0210 13:19:01.941000  687246 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-112306" hosting pod "kube-apiserver-no-preload-112306" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-112306" has status "Ready":"False"
	I0210 13:19:01.941006  687246 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-112306" in "kube-system" namespace to be "Ready" ...
	I0210 13:19:02.045822  687246 pod_ready.go:98] node "no-preload-112306" hosting pod "kube-controller-manager-no-preload-112306" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-112306" has status "Ready":"False"
	I0210 13:19:02.045867  687246 pod_ready.go:82] duration metric: took 104.851082ms for pod "kube-controller-manager-no-preload-112306" in "kube-system" namespace to be "Ready" ...
	E0210 13:19:02.045883  687246 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-112306" hosting pod "kube-controller-manager-no-preload-112306" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-112306" has status "Ready":"False"
	I0210 13:19:02.045892  687246 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-l2wxd" in "kube-system" namespace to be "Ready" ...
	I0210 13:19:02.446412  687246 pod_ready.go:98] node "no-preload-112306" hosting pod "kube-proxy-l2wxd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-112306" has status "Ready":"False"
	I0210 13:19:02.446458  687246 pod_ready.go:82] duration metric: took 400.550894ms for pod "kube-proxy-l2wxd" in "kube-system" namespace to be "Ready" ...
	E0210 13:19:02.446472  687246 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-112306" hosting pod "kube-proxy-l2wxd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-112306" has status "Ready":"False"
	I0210 13:19:02.446481  687246 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-112306" in "kube-system" namespace to be "Ready" ...
	I0210 13:19:02.846804  687246 pod_ready.go:98] node "no-preload-112306" hosting pod "kube-scheduler-no-preload-112306" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-112306" has status "Ready":"False"
	I0210 13:19:02.846846  687246 pod_ready.go:82] duration metric: took 400.356408ms for pod "kube-scheduler-no-preload-112306" in "kube-system" namespace to be "Ready" ...
	E0210 13:19:02.846862  687246 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-112306" hosting pod "kube-scheduler-no-preload-112306" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-112306" has status "Ready":"False"
	I0210 13:19:02.846872  687246 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-r9f86" in "kube-system" namespace to be "Ready" ...
	I0210 13:19:03.257320  687246 pod_ready.go:98] node "no-preload-112306" hosting pod "metrics-server-f79f97bbb-r9f86" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-112306" has status "Ready":"False"
	I0210 13:19:03.257362  687246 pod_ready.go:82] duration metric: took 410.478122ms for pod "metrics-server-f79f97bbb-r9f86" in "kube-system" namespace to be "Ready" ...
	E0210 13:19:03.257387  687246 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-112306" hosting pod "metrics-server-f79f97bbb-r9f86" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-112306" has status "Ready":"False"
	I0210 13:19:03.257402  687246 pod_ready.go:39] duration metric: took 1.333206132s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0210 13:19:03.257429  687246 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0210 13:19:03.270492  687246 ops.go:34] apiserver oom_adj: -16
	I0210 13:19:03.270527  687246 kubeadm.go:597] duration metric: took 8.382610228s to restartPrimaryControlPlane
	I0210 13:19:03.270541  687246 kubeadm.go:394] duration metric: took 8.432236082s to StartCluster
	I0210 13:19:03.270576  687246 settings.go:142] acquiring lock: {Name:mk4bd8331d641665e48ff1d1c4382f2e915609be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:19:03.270674  687246 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20383-625153/kubeconfig
	I0210 13:19:03.271807  687246 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20383-625153/kubeconfig: {Name:mke7ef1ff4ff1259856291979fdd0337df3a08b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:19:03.272090  687246 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0210 13:19:03.272348  687246 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0210 13:19:03.272486  687246 config.go:182] Loaded profile config "no-preload-112306": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 13:19:03.272502  687246 addons.go:69] Setting storage-provisioner=true in profile "no-preload-112306"
	I0210 13:19:03.272523  687246 addons.go:238] Setting addon storage-provisioner=true in "no-preload-112306"
	W0210 13:19:03.272531  687246 addons.go:247] addon storage-provisioner should already be in state true
	I0210 13:19:03.272535  687246 addons.go:69] Setting default-storageclass=true in profile "no-preload-112306"
	I0210 13:19:03.272563  687246 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-112306"
	I0210 13:19:03.272581  687246 host.go:66] Checking if "no-preload-112306" exists ...
	I0210 13:19:03.272579  687246 addons.go:69] Setting dashboard=true in profile "no-preload-112306"
	I0210 13:19:03.272613  687246 addons.go:238] Setting addon dashboard=true in "no-preload-112306"
	I0210 13:19:03.272543  687246 addons.go:69] Setting metrics-server=true in profile "no-preload-112306"
	W0210 13:19:03.272627  687246 addons.go:247] addon dashboard should already be in state true
	I0210 13:19:03.272632  687246 addons.go:238] Setting addon metrics-server=true in "no-preload-112306"
	W0210 13:19:03.272640  687246 addons.go:247] addon metrics-server should already be in state true
	I0210 13:19:03.272663  687246 host.go:66] Checking if "no-preload-112306" exists ...
	I0210 13:19:03.272665  687246 host.go:66] Checking if "no-preload-112306" exists ...
	I0210 13:19:03.272985  687246 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:19:03.273017  687246 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:19:03.273031  687246 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:19:03.273042  687246 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:19:03.273068  687246 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:19:03.273084  687246 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:19:03.273124  687246 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:19:03.273147  687246 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:19:03.274778  687246 out.go:177] * Verifying Kubernetes components...
	I0210 13:19:03.276145  687246 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 13:19:03.325864  687246 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41675
	I0210 13:19:03.326150  687246 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40907
	I0210 13:19:03.326249  687246 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35439
	I0210 13:19:03.326697  687246 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:19:03.326824  687246 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:19:03.327352  687246 main.go:141] libmachine: Using API Version  1
	I0210 13:19:03.327373  687246 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:19:03.327526  687246 main.go:141] libmachine: Using API Version  1
	I0210 13:19:03.327541  687246 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:19:03.327611  687246 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:19:03.328177  687246 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:19:03.328264  687246 main.go:141] libmachine: Using API Version  1
	I0210 13:19:03.328281  687246 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:19:03.328558  687246 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:19:03.328745  687246 main.go:141] libmachine: (no-preload-112306) Calling .GetState
	I0210 13:19:03.328801  687246 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:19:03.329216  687246 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:19:03.329260  687246 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:19:03.329975  687246 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:19:03.330013  687246 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:19:03.331669  687246 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33531
	I0210 13:19:03.333420  687246 addons.go:238] Setting addon default-storageclass=true in "no-preload-112306"
	W0210 13:19:03.333443  687246 addons.go:247] addon default-storageclass should already be in state true
	I0210 13:19:03.333478  687246 host.go:66] Checking if "no-preload-112306" exists ...
	I0210 13:19:03.333830  687246 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:19:03.333860  687246 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:19:03.337574  687246 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:19:03.338075  687246 main.go:141] libmachine: Using API Version  1
	I0210 13:19:03.338099  687246 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:19:03.338524  687246 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:19:03.339051  687246 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:19:03.339084  687246 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:19:03.352256  687246 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42027
	I0210 13:19:03.352792  687246 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37607
	I0210 13:19:03.352857  687246 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:19:03.353189  687246 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:19:03.353518  687246 main.go:141] libmachine: Using API Version  1
	I0210 13:19:03.353540  687246 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:19:03.353679  687246 main.go:141] libmachine: Using API Version  1
	I0210 13:19:03.353694  687246 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:19:03.353979  687246 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:19:03.354607  687246 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:19:03.354651  687246 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:19:03.354871  687246 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:19:03.355085  687246 main.go:141] libmachine: (no-preload-112306) Calling .GetState
	I0210 13:19:03.356891  687246 main.go:141] libmachine: (no-preload-112306) Calling .DriverName
	I0210 13:19:03.358232  687246 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45687
	I0210 13:19:03.358535  687246 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0210 13:19:03.359376  687246 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35465
	I0210 13:19:03.359736  687246 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:19:03.360119  687246 main.go:141] libmachine: Using API Version  1
	I0210 13:19:03.360146  687246 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:19:03.360407  687246 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:19:03.360576  687246 main.go:141] libmachine: (no-preload-112306) Calling .GetState
	I0210 13:19:03.362083  687246 main.go:141] libmachine: (no-preload-112306) Calling .DriverName
	I0210 13:19:03.362188  687246 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:19:03.363595  687246 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 13:19:03.363618  687246 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0210 13:19:03.365122  687246 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0210 13:19:03.365144  687246 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0210 13:19:03.365170  687246 main.go:141] libmachine: (no-preload-112306) Calling .GetSSHHostname
	I0210 13:19:03.365216  687246 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0210 13:19:03.365232  687246 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0210 13:19:03.365250  687246 main.go:141] libmachine: (no-preload-112306) Calling .GetSSHHostname
	I0210 13:19:03.366654  687246 main.go:141] libmachine: Using API Version  1
	I0210 13:19:03.366679  687246 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:19:03.367423  687246 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:19:03.367667  687246 main.go:141] libmachine: (no-preload-112306) Calling .GetState
	I0210 13:19:03.369089  687246 main.go:141] libmachine: (no-preload-112306) DBG | domain no-preload-112306 has defined MAC address 52:54:00:d9:00:6e in network mk-no-preload-112306
	I0210 13:19:03.369767  687246 main.go:141] libmachine: (no-preload-112306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:00:6e", ip: ""} in network mk-no-preload-112306: {Iface:virbr1 ExpiryTime:2025-02-10 14:18:27 +0000 UTC Type:0 Mac:52:54:00:d9:00:6e Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:no-preload-112306 Clientid:01:52:54:00:d9:00:6e}
	I0210 13:19:03.369803  687246 main.go:141] libmachine: (no-preload-112306) DBG | domain no-preload-112306 has defined IP address 192.168.39.183 and MAC address 52:54:00:d9:00:6e in network mk-no-preload-112306
	I0210 13:19:03.369970  687246 main.go:141] libmachine: (no-preload-112306) Calling .GetSSHPort
	I0210 13:19:03.370064  687246 main.go:141] libmachine: (no-preload-112306) DBG | domain no-preload-112306 has defined MAC address 52:54:00:d9:00:6e in network mk-no-preload-112306
	I0210 13:19:03.370156  687246 main.go:141] libmachine: (no-preload-112306) Calling .GetSSHKeyPath
	I0210 13:19:03.370278  687246 main.go:141] libmachine: (no-preload-112306) Calling .GetSSHUsername
	I0210 13:19:03.370403  687246 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/no-preload-112306/id_rsa Username:docker}
	I0210 13:19:03.370766  687246 main.go:141] libmachine: (no-preload-112306) Calling .GetSSHPort
	I0210 13:19:03.370791  687246 main.go:141] libmachine: (no-preload-112306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:00:6e", ip: ""} in network mk-no-preload-112306: {Iface:virbr1 ExpiryTime:2025-02-10 14:18:27 +0000 UTC Type:0 Mac:52:54:00:d9:00:6e Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:no-preload-112306 Clientid:01:52:54:00:d9:00:6e}
	I0210 13:19:03.370811  687246 main.go:141] libmachine: (no-preload-112306) DBG | domain no-preload-112306 has defined IP address 192.168.39.183 and MAC address 52:54:00:d9:00:6e in network mk-no-preload-112306
	I0210 13:19:03.371040  687246 main.go:141] libmachine: (no-preload-112306) Calling .GetSSHKeyPath
	I0210 13:19:03.371048  687246 main.go:141] libmachine: (no-preload-112306) Calling .DriverName
	I0210 13:19:03.371296  687246 main.go:141] libmachine: (no-preload-112306) Calling .GetSSHUsername
	I0210 13:19:03.371557  687246 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/no-preload-112306/id_rsa Username:docker}
	I0210 13:19:03.372985  687246 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0210 13:19:03.374172  687246 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0210 13:19:03.374197  687246 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0210 13:19:03.374228  687246 main.go:141] libmachine: (no-preload-112306) Calling .GetSSHHostname
	I0210 13:19:03.377808  687246 main.go:141] libmachine: (no-preload-112306) DBG | domain no-preload-112306 has defined MAC address 52:54:00:d9:00:6e in network mk-no-preload-112306
	I0210 13:19:03.378072  687246 main.go:141] libmachine: (no-preload-112306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:00:6e", ip: ""} in network mk-no-preload-112306: {Iface:virbr1 ExpiryTime:2025-02-10 14:18:27 +0000 UTC Type:0 Mac:52:54:00:d9:00:6e Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:no-preload-112306 Clientid:01:52:54:00:d9:00:6e}
	I0210 13:19:03.378101  687246 main.go:141] libmachine: (no-preload-112306) DBG | domain no-preload-112306 has defined IP address 192.168.39.183 and MAC address 52:54:00:d9:00:6e in network mk-no-preload-112306
	I0210 13:19:03.378393  687246 main.go:141] libmachine: (no-preload-112306) Calling .GetSSHPort
	I0210 13:19:03.378650  687246 main.go:141] libmachine: (no-preload-112306) Calling .GetSSHKeyPath
	I0210 13:19:03.378760  687246 main.go:141] libmachine: (no-preload-112306) Calling .GetSSHUsername
	I0210 13:19:03.378941  687246 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/no-preload-112306/id_rsa Username:docker}
	I0210 13:19:03.388268  687246 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43715
	I0210 13:19:03.389066  687246 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:19:03.389763  687246 main.go:141] libmachine: Using API Version  1
	I0210 13:19:03.389790  687246 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:19:03.390340  687246 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:19:03.390557  687246 main.go:141] libmachine: (no-preload-112306) Calling .GetState
	I0210 13:19:03.392360  687246 main.go:141] libmachine: (no-preload-112306) Calling .DriverName
	I0210 13:19:03.392574  687246 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0210 13:19:03.392592  687246 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0210 13:19:03.392613  687246 main.go:141] libmachine: (no-preload-112306) Calling .GetSSHHostname
	I0210 13:19:03.395973  687246 main.go:141] libmachine: (no-preload-112306) DBG | domain no-preload-112306 has defined MAC address 52:54:00:d9:00:6e in network mk-no-preload-112306
	I0210 13:19:03.396427  687246 main.go:141] libmachine: (no-preload-112306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:00:6e", ip: ""} in network mk-no-preload-112306: {Iface:virbr1 ExpiryTime:2025-02-10 14:18:27 +0000 UTC Type:0 Mac:52:54:00:d9:00:6e Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:no-preload-112306 Clientid:01:52:54:00:d9:00:6e}
	I0210 13:19:03.396449  687246 main.go:141] libmachine: (no-preload-112306) DBG | domain no-preload-112306 has defined IP address 192.168.39.183 and MAC address 52:54:00:d9:00:6e in network mk-no-preload-112306
	I0210 13:19:03.396693  687246 main.go:141] libmachine: (no-preload-112306) Calling .GetSSHPort
	I0210 13:19:03.396848  687246 main.go:141] libmachine: (no-preload-112306) Calling .GetSSHKeyPath
	I0210 13:19:03.396969  687246 main.go:141] libmachine: (no-preload-112306) Calling .GetSSHUsername
	I0210 13:19:03.397071  687246 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/no-preload-112306/id_rsa Username:docker}
	I0210 13:19:03.548463  687246 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 13:19:03.574737  687246 node_ready.go:35] waiting up to 6m0s for node "no-preload-112306" to be "Ready" ...
	I0210 13:19:03.658457  687246 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0210 13:19:03.665694  687246 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0210 13:19:03.686780  687246 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0210 13:19:03.686817  687246 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0210 13:19:03.716503  687246 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0210 13:19:03.716605  687246 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0210 13:19:03.737657  687246 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0210 13:19:03.737697  687246 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0210 13:19:03.773691  687246 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0210 13:19:03.773723  687246 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0210 13:19:03.803244  687246 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0210 13:19:03.803282  687246 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0210 13:19:03.841122  687246 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0210 13:19:03.841178  687246 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0210 13:19:03.859686  687246 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0210 13:19:03.859730  687246 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0210 13:19:03.893538  687246 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0210 13:19:03.905468  687246 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0210 13:19:03.905506  687246 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0210 13:19:04.015735  687246 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0210 13:19:04.015779  687246 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0210 13:19:04.095494  687246 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0210 13:19:04.095533  687246 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0210 13:19:04.182570  687246 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0210 13:19:04.182604  687246 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0210 13:19:04.290572  687246 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0210 13:19:04.290609  687246 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0210 13:19:04.379024  687246 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0210 13:19:05.269394  687246 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.61086643s)
	I0210 13:19:05.269432  687246 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.603685571s)
	I0210 13:19:05.269459  687246 main.go:141] libmachine: Making call to close driver server
	I0210 13:19:05.269473  687246 main.go:141] libmachine: (no-preload-112306) Calling .Close
	I0210 13:19:05.269483  687246 main.go:141] libmachine: Making call to close driver server
	I0210 13:19:05.269501  687246 main.go:141] libmachine: (no-preload-112306) Calling .Close
	I0210 13:19:05.269795  687246 main.go:141] libmachine: Successfully made call to close driver server
	I0210 13:19:05.269807  687246 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 13:19:05.269815  687246 main.go:141] libmachine: Making call to close driver server
	I0210 13:19:05.269822  687246 main.go:141] libmachine: (no-preload-112306) Calling .Close
	I0210 13:19:05.269877  687246 main.go:141] libmachine: (no-preload-112306) DBG | Closing plugin on server side
	I0210 13:19:05.270150  687246 main.go:141] libmachine: Successfully made call to close driver server
	I0210 13:19:05.270163  687246 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 13:19:05.270210  687246 main.go:141] libmachine: Successfully made call to close driver server
	I0210 13:19:05.270234  687246 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 13:19:05.270244  687246 main.go:141] libmachine: Making call to close driver server
	I0210 13:19:05.270256  687246 main.go:141] libmachine: (no-preload-112306) Calling .Close
	I0210 13:19:05.270475  687246 main.go:141] libmachine: Successfully made call to close driver server
	I0210 13:19:05.270487  687246 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 13:19:05.276056  687246 main.go:141] libmachine: Making call to close driver server
	I0210 13:19:05.276071  687246 main.go:141] libmachine: (no-preload-112306) Calling .Close
	I0210 13:19:05.276332  687246 main.go:141] libmachine: (no-preload-112306) DBG | Closing plugin on server side
	I0210 13:19:05.276387  687246 main.go:141] libmachine: Successfully made call to close driver server
	I0210 13:19:05.276403  687246 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 13:19:05.336330  687246 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.442720717s)
	I0210 13:19:05.336392  687246 main.go:141] libmachine: Making call to close driver server
	I0210 13:19:05.336411  687246 main.go:141] libmachine: (no-preload-112306) Calling .Close
	I0210 13:19:05.336680  687246 main.go:141] libmachine: Successfully made call to close driver server
	I0210 13:19:05.336699  687246 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 13:19:05.336709  687246 main.go:141] libmachine: Making call to close driver server
	I0210 13:19:05.336717  687246 main.go:141] libmachine: (no-preload-112306) Calling .Close
	I0210 13:19:05.336939  687246 main.go:141] libmachine: Successfully made call to close driver server
	I0210 13:19:05.336971  687246 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 13:19:05.336983  687246 addons.go:479] Verifying addon metrics-server=true in "no-preload-112306"
	I0210 13:19:05.581781  687246 node_ready.go:53] node "no-preload-112306" has status "Ready":"False"
	I0210 13:19:05.725792  687246 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.346717939s)
	I0210 13:19:05.725858  687246 main.go:141] libmachine: Making call to close driver server
	I0210 13:19:05.725874  687246 main.go:141] libmachine: (no-preload-112306) Calling .Close
	I0210 13:19:05.726316  687246 main.go:141] libmachine: Successfully made call to close driver server
	I0210 13:19:05.726386  687246 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 13:19:05.726393  687246 main.go:141] libmachine: (no-preload-112306) DBG | Closing plugin on server side
	I0210 13:19:05.726429  687246 main.go:141] libmachine: Making call to close driver server
	I0210 13:19:05.726457  687246 main.go:141] libmachine: (no-preload-112306) Calling .Close
	I0210 13:19:05.726767  687246 main.go:141] libmachine: Successfully made call to close driver server
	I0210 13:19:05.726826  687246 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 13:19:05.729250  687246 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-112306 addons enable metrics-server
	
	I0210 13:19:05.730726  687246 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0210 13:19:05.732260  687246 addons.go:514] duration metric: took 2.459928393s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0210 13:19:03.118249  679972 api_server.go:253] Checking apiserver healthz at https://192.168.50.25:8443/healthz ...
	I0210 13:19:03.118918  679972 api_server.go:269] stopped: https://192.168.50.25:8443/healthz: Get "https://192.168.50.25:8443/healthz": dial tcp 192.168.50.25:8443: connect: connection refused
	I0210 13:19:03.118980  679972 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:19:03.119040  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:19:03.156786  679972 cri.go:89] found id: "cc0fad0e28c192a15bcbd2ea0cbd62014f415a10df309d883d47e53249e42b99"
	I0210 13:19:03.156814  679972 cri.go:89] found id: ""
	I0210 13:19:03.156824  679972 logs.go:282] 1 containers: [cc0fad0e28c192a15bcbd2ea0cbd62014f415a10df309d883d47e53249e42b99]
	I0210 13:19:03.156890  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:03.161022  679972 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:19:03.161124  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:19:03.206717  679972 cri.go:89] found id: "20160adca5607e90c84c8383218ad704de5a5f989ebb40d7aa180da7c1ac9ff2"
	I0210 13:19:03.206756  679972 cri.go:89] found id: ""
	I0210 13:19:03.206767  679972 logs.go:282] 1 containers: [20160adca5607e90c84c8383218ad704de5a5f989ebb40d7aa180da7c1ac9ff2]
	I0210 13:19:03.206840  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:03.212265  679972 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:19:03.212353  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:19:03.264572  679972 cri.go:89] found id: "598f33d31a9580b27d51606559814f76af6d19f0ce8c85565694b0042307e34d"
	I0210 13:19:03.264605  679972 cri.go:89] found id: "de4935b1d2c7413b50a3a6bd9cb6b5243f8918b6f3662d005b53ee2d6b5d398f"
	I0210 13:19:03.264611  679972 cri.go:89] found id: ""
	I0210 13:19:03.264620  679972 logs.go:282] 2 containers: [598f33d31a9580b27d51606559814f76af6d19f0ce8c85565694b0042307e34d de4935b1d2c7413b50a3a6bd9cb6b5243f8918b6f3662d005b53ee2d6b5d398f]
	I0210 13:19:03.264689  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:03.270577  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:03.275726  679972 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:19:03.275784  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:19:03.333677  679972 cri.go:89] found id: "4ead52805781732f3cfbd1fa7f211cc02747bbd972dc9ecea60035b9c2f809ae"
	I0210 13:19:03.333696  679972 cri.go:89] found id: ""
	I0210 13:19:03.333706  679972 logs.go:282] 1 containers: [4ead52805781732f3cfbd1fa7f211cc02747bbd972dc9ecea60035b9c2f809ae]
	I0210 13:19:03.333758  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:03.353414  679972 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:19:03.353490  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:19:03.422730  679972 cri.go:89] found id: "489730fca41c45aafdf96e8fad2af2dd4df11a2818e7d0cf4b02956a5c613aa0"
	I0210 13:19:03.422757  679972 cri.go:89] found id: ""
	I0210 13:19:03.422767  679972 logs.go:282] 1 containers: [489730fca41c45aafdf96e8fad2af2dd4df11a2818e7d0cf4b02956a5c613aa0]
	I0210 13:19:03.422827  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:03.427196  679972 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:19:03.427285  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:19:03.462921  679972 cri.go:89] found id: "c8aa1561a3ec9eeae9ad70bcf145afbf075ed6989d1e52c4b19164eecdc28e79"
	I0210 13:19:03.462947  679972 cri.go:89] found id: ""
	I0210 13:19:03.462957  679972 logs.go:282] 1 containers: [c8aa1561a3ec9eeae9ad70bcf145afbf075ed6989d1e52c4b19164eecdc28e79]
	I0210 13:19:03.463024  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:03.467319  679972 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:19:03.467411  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:19:03.511500  679972 cri.go:89] found id: ""
	I0210 13:19:03.511537  679972 logs.go:282] 0 containers: []
	W0210 13:19:03.511550  679972 logs.go:284] No container was found matching "kindnet"
	I0210 13:19:03.511560  679972 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0210 13:19:03.511635  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0210 13:19:03.548524  679972 cri.go:89] found id: "27fbbf4567ed637ba5fcc220e7a30e6ae4112b3c8576695f7f9d288b3c461e2b"
	I0210 13:19:03.548546  679972 cri.go:89] found id: ""
	I0210 13:19:03.548555  679972 logs.go:282] 1 containers: [27fbbf4567ed637ba5fcc220e7a30e6ae4112b3c8576695f7f9d288b3c461e2b]
	I0210 13:19:03.548616  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:03.556113  679972 logs.go:123] Gathering logs for coredns [de4935b1d2c7413b50a3a6bd9cb6b5243f8918b6f3662d005b53ee2d6b5d398f] ...
	I0210 13:19:03.556150  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de4935b1d2c7413b50a3a6bd9cb6b5243f8918b6f3662d005b53ee2d6b5d398f"
	I0210 13:19:03.593916  679972 logs.go:123] Gathering logs for kube-scheduler [4ead52805781732f3cfbd1fa7f211cc02747bbd972dc9ecea60035b9c2f809ae] ...
	I0210 13:19:03.593958  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ead52805781732f3cfbd1fa7f211cc02747bbd972dc9ecea60035b9c2f809ae"
	I0210 13:19:03.654956  679972 logs.go:123] Gathering logs for kube-proxy [489730fca41c45aafdf96e8fad2af2dd4df11a2818e7d0cf4b02956a5c613aa0] ...
	I0210 13:19:03.655011  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 489730fca41c45aafdf96e8fad2af2dd4df11a2818e7d0cf4b02956a5c613aa0"
	I0210 13:19:03.701698  679972 logs.go:123] Gathering logs for kube-controller-manager [c8aa1561a3ec9eeae9ad70bcf145afbf075ed6989d1e52c4b19164eecdc28e79] ...
	I0210 13:19:03.701739  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8aa1561a3ec9eeae9ad70bcf145afbf075ed6989d1e52c4b19164eecdc28e79"
	I0210 13:19:03.746313  679972 logs.go:123] Gathering logs for storage-provisioner [27fbbf4567ed637ba5fcc220e7a30e6ae4112b3c8576695f7f9d288b3c461e2b] ...
	I0210 13:19:03.746348  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27fbbf4567ed637ba5fcc220e7a30e6ae4112b3c8576695f7f9d288b3c461e2b"
	I0210 13:19:03.797672  679972 logs.go:123] Gathering logs for kubelet ...
	I0210 13:19:03.797710  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:19:03.951367  679972 logs.go:123] Gathering logs for kube-apiserver [cc0fad0e28c192a15bcbd2ea0cbd62014f415a10df309d883d47e53249e42b99] ...
	I0210 13:19:03.951423  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc0fad0e28c192a15bcbd2ea0cbd62014f415a10df309d883d47e53249e42b99"
	I0210 13:19:04.006606  679972 logs.go:123] Gathering logs for etcd [20160adca5607e90c84c8383218ad704de5a5f989ebb40d7aa180da7c1ac9ff2] ...
	I0210 13:19:04.006643  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20160adca5607e90c84c8383218ad704de5a5f989ebb40d7aa180da7c1ac9ff2"
	I0210 13:19:04.062850  679972 logs.go:123] Gathering logs for coredns [598f33d31a9580b27d51606559814f76af6d19f0ce8c85565694b0042307e34d] ...
	I0210 13:19:04.062907  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 598f33d31a9580b27d51606559814f76af6d19f0ce8c85565694b0042307e34d"
	I0210 13:19:04.101711  679972 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:19:04.101751  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:19:04.408746  679972 logs.go:123] Gathering logs for container status ...
	I0210 13:19:04.408793  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:19:04.467339  679972 logs.go:123] Gathering logs for dmesg ...
	I0210 13:19:04.467397  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:19:04.490353  679972 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:19:04.490386  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:19:04.600967  679972 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:19:04.363800  687731 main.go:141] libmachine: (embed-certs-396582) DBG | domain embed-certs-396582 has defined MAC address 52:54:00:df:11:43 in network mk-embed-certs-396582
	I0210 13:19:04.364398  687731 main.go:141] libmachine: (embed-certs-396582) DBG | unable to find current IP address of domain embed-certs-396582 in network mk-embed-certs-396582
	I0210 13:19:04.364424  687731 main.go:141] libmachine: (embed-certs-396582) DBG | I0210 13:19:04.364397  687765 retry.go:31] will retry after 3.163819363s: waiting for domain to come up
	I0210 13:19:07.530387  687731 main.go:141] libmachine: (embed-certs-396582) DBG | domain embed-certs-396582 has defined MAC address 52:54:00:df:11:43 in network mk-embed-certs-396582
	I0210 13:19:07.530976  687731 main.go:141] libmachine: (embed-certs-396582) DBG | unable to find current IP address of domain embed-certs-396582 in network mk-embed-certs-396582
	I0210 13:19:07.531007  687731 main.go:141] libmachine: (embed-certs-396582) DBG | I0210 13:19:07.530922  687765 retry.go:31] will retry after 4.157949996s: waiting for domain to come up
	I0210 13:19:08.078511  687246 node_ready.go:53] node "no-preload-112306" has status "Ready":"False"
	I0210 13:19:10.079225  687246 node_ready.go:53] node "no-preload-112306" has status "Ready":"False"
	I0210 13:19:10.580132  687246 node_ready.go:49] node "no-preload-112306" has status "Ready":"True"
	I0210 13:19:10.580168  687246 node_ready.go:38] duration metric: took 7.005384959s for node "no-preload-112306" to be "Ready" ...
	I0210 13:19:10.580183  687246 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0210 13:19:10.582879  687246 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-4h69k" in "kube-system" namespace to be "Ready" ...
	I0210 13:19:07.101929  679972 api_server.go:253] Checking apiserver healthz at https://192.168.50.25:8443/healthz ...
	I0210 13:19:07.102734  679972 api_server.go:269] stopped: https://192.168.50.25:8443/healthz: Get "https://192.168.50.25:8443/healthz": dial tcp 192.168.50.25:8443: connect: connection refused
	I0210 13:19:07.102804  679972 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:19:07.102876  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:19:07.138904  679972 cri.go:89] found id: "cc0fad0e28c192a15bcbd2ea0cbd62014f415a10df309d883d47e53249e42b99"
	I0210 13:19:07.138929  679972 cri.go:89] found id: ""
	I0210 13:19:07.138937  679972 logs.go:282] 1 containers: [cc0fad0e28c192a15bcbd2ea0cbd62014f415a10df309d883d47e53249e42b99]
	I0210 13:19:07.138994  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:07.142923  679972 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:19:07.142974  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:19:07.175205  679972 cri.go:89] found id: "20160adca5607e90c84c8383218ad704de5a5f989ebb40d7aa180da7c1ac9ff2"
	I0210 13:19:07.175229  679972 cri.go:89] found id: ""
	I0210 13:19:07.175238  679972 logs.go:282] 1 containers: [20160adca5607e90c84c8383218ad704de5a5f989ebb40d7aa180da7c1ac9ff2]
	I0210 13:19:07.175293  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:07.179410  679972 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:19:07.179493  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:19:07.214267  679972 cri.go:89] found id: "598f33d31a9580b27d51606559814f76af6d19f0ce8c85565694b0042307e34d"
	I0210 13:19:07.214300  679972 cri.go:89] found id: "de4935b1d2c7413b50a3a6bd9cb6b5243f8918b6f3662d005b53ee2d6b5d398f"
	I0210 13:19:07.214304  679972 cri.go:89] found id: ""
	I0210 13:19:07.214433  679972 logs.go:282] 2 containers: [598f33d31a9580b27d51606559814f76af6d19f0ce8c85565694b0042307e34d de4935b1d2c7413b50a3a6bd9cb6b5243f8918b6f3662d005b53ee2d6b5d398f]
	I0210 13:19:07.214784  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:07.220350  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:07.224368  679972 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:19:07.224446  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:19:07.258607  679972 cri.go:89] found id: "4ead52805781732f3cfbd1fa7f211cc02747bbd972dc9ecea60035b9c2f809ae"
	I0210 13:19:07.258639  679972 cri.go:89] found id: ""
	I0210 13:19:07.258650  679972 logs.go:282] 1 containers: [4ead52805781732f3cfbd1fa7f211cc02747bbd972dc9ecea60035b9c2f809ae]
	I0210 13:19:07.258721  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:07.262517  679972 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:19:07.262593  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:19:07.303312  679972 cri.go:89] found id: "489730fca41c45aafdf96e8fad2af2dd4df11a2818e7d0cf4b02956a5c613aa0"
	I0210 13:19:07.303335  679972 cri.go:89] found id: ""
	I0210 13:19:07.303344  679972 logs.go:282] 1 containers: [489730fca41c45aafdf96e8fad2af2dd4df11a2818e7d0cf4b02956a5c613aa0]
	I0210 13:19:07.303399  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:07.308499  679972 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:19:07.308594  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:19:07.350799  679972 cri.go:89] found id: "c8aa1561a3ec9eeae9ad70bcf145afbf075ed6989d1e52c4b19164eecdc28e79"
	I0210 13:19:07.350829  679972 cri.go:89] found id: ""
	I0210 13:19:07.350839  679972 logs.go:282] 1 containers: [c8aa1561a3ec9eeae9ad70bcf145afbf075ed6989d1e52c4b19164eecdc28e79]
	I0210 13:19:07.350908  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:07.354738  679972 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:19:07.354799  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:19:07.390681  679972 cri.go:89] found id: ""
	I0210 13:19:07.390716  679972 logs.go:282] 0 containers: []
	W0210 13:19:07.390730  679972 logs.go:284] No container was found matching "kindnet"
	I0210 13:19:07.390739  679972 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0210 13:19:07.390813  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0210 13:19:07.423661  679972 cri.go:89] found id: "27fbbf4567ed637ba5fcc220e7a30e6ae4112b3c8576695f7f9d288b3c461e2b"
	I0210 13:19:07.423693  679972 cri.go:89] found id: ""
	I0210 13:19:07.423704  679972 logs.go:282] 1 containers: [27fbbf4567ed637ba5fcc220e7a30e6ae4112b3c8576695f7f9d288b3c461e2b]
	I0210 13:19:07.423762  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:07.427584  679972 logs.go:123] Gathering logs for etcd [20160adca5607e90c84c8383218ad704de5a5f989ebb40d7aa180da7c1ac9ff2] ...
	I0210 13:19:07.427611  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20160adca5607e90c84c8383218ad704de5a5f989ebb40d7aa180da7c1ac9ff2"
	I0210 13:19:07.464055  679972 logs.go:123] Gathering logs for kube-proxy [489730fca41c45aafdf96e8fad2af2dd4df11a2818e7d0cf4b02956a5c613aa0] ...
	I0210 13:19:07.464104  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 489730fca41c45aafdf96e8fad2af2dd4df11a2818e7d0cf4b02956a5c613aa0"
	I0210 13:19:07.502854  679972 logs.go:123] Gathering logs for storage-provisioner [27fbbf4567ed637ba5fcc220e7a30e6ae4112b3c8576695f7f9d288b3c461e2b] ...
	I0210 13:19:07.502893  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27fbbf4567ed637ba5fcc220e7a30e6ae4112b3c8576695f7f9d288b3c461e2b"
	I0210 13:19:07.542781  679972 logs.go:123] Gathering logs for container status ...
	I0210 13:19:07.542813  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:19:07.580724  679972 logs.go:123] Gathering logs for dmesg ...
	I0210 13:19:07.580771  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:19:07.597955  679972 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:19:07.597987  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:19:07.671656  679972 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:19:07.671692  679972 logs.go:123] Gathering logs for kube-apiserver [cc0fad0e28c192a15bcbd2ea0cbd62014f415a10df309d883d47e53249e42b99] ...
	I0210 13:19:07.671711  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc0fad0e28c192a15bcbd2ea0cbd62014f415a10df309d883d47e53249e42b99"
	I0210 13:19:07.710206  679972 logs.go:123] Gathering logs for coredns [598f33d31a9580b27d51606559814f76af6d19f0ce8c85565694b0042307e34d] ...
	I0210 13:19:07.710242  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 598f33d31a9580b27d51606559814f76af6d19f0ce8c85565694b0042307e34d"
	I0210 13:19:07.745443  679972 logs.go:123] Gathering logs for coredns [de4935b1d2c7413b50a3a6bd9cb6b5243f8918b6f3662d005b53ee2d6b5d398f] ...
	I0210 13:19:07.745483  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de4935b1d2c7413b50a3a6bd9cb6b5243f8918b6f3662d005b53ee2d6b5d398f"
	I0210 13:19:07.784068  679972 logs.go:123] Gathering logs for kube-scheduler [4ead52805781732f3cfbd1fa7f211cc02747bbd972dc9ecea60035b9c2f809ae] ...
	I0210 13:19:07.784109  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ead52805781732f3cfbd1fa7f211cc02747bbd972dc9ecea60035b9c2f809ae"
	I0210 13:19:07.830222  679972 logs.go:123] Gathering logs for kube-controller-manager [c8aa1561a3ec9eeae9ad70bcf145afbf075ed6989d1e52c4b19164eecdc28e79] ...
	I0210 13:19:07.830255  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8aa1561a3ec9eeae9ad70bcf145afbf075ed6989d1e52c4b19164eecdc28e79"
	I0210 13:19:07.862575  679972 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:19:07.862603  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:19:08.154723  679972 logs.go:123] Gathering logs for kubelet ...
	I0210 13:19:08.154765  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:19:10.762025  679972 api_server.go:253] Checking apiserver healthz at https://192.168.50.25:8443/healthz ...
	I0210 13:19:10.762674  679972 api_server.go:269] stopped: https://192.168.50.25:8443/healthz: Get "https://192.168.50.25:8443/healthz": dial tcp 192.168.50.25:8443: connect: connection refused
	I0210 13:19:10.762734  679972 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:19:10.762784  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:19:10.806736  679972 cri.go:89] found id: "cc0fad0e28c192a15bcbd2ea0cbd62014f415a10df309d883d47e53249e42b99"
	I0210 13:19:10.806771  679972 cri.go:89] found id: ""
	I0210 13:19:10.806782  679972 logs.go:282] 1 containers: [cc0fad0e28c192a15bcbd2ea0cbd62014f415a10df309d883d47e53249e42b99]
	I0210 13:19:10.806849  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:10.811051  679972 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:19:10.811124  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:19:10.844453  679972 cri.go:89] found id: "20160adca5607e90c84c8383218ad704de5a5f989ebb40d7aa180da7c1ac9ff2"
	I0210 13:19:10.844489  679972 cri.go:89] found id: ""
	I0210 13:19:10.844502  679972 logs.go:282] 1 containers: [20160adca5607e90c84c8383218ad704de5a5f989ebb40d7aa180da7c1ac9ff2]
	I0210 13:19:10.844572  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:10.848694  679972 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:19:10.848760  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:19:10.891161  679972 cri.go:89] found id: "598f33d31a9580b27d51606559814f76af6d19f0ce8c85565694b0042307e34d"
	I0210 13:19:10.891187  679972 cri.go:89] found id: "de4935b1d2c7413b50a3a6bd9cb6b5243f8918b6f3662d005b53ee2d6b5d398f"
	I0210 13:19:10.891194  679972 cri.go:89] found id: ""
	I0210 13:19:10.891202  679972 logs.go:282] 2 containers: [598f33d31a9580b27d51606559814f76af6d19f0ce8c85565694b0042307e34d de4935b1d2c7413b50a3a6bd9cb6b5243f8918b6f3662d005b53ee2d6b5d398f]
	I0210 13:19:10.891254  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:10.895399  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:10.898969  679972 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:19:10.899033  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:19:10.939931  679972 cri.go:89] found id: "4ead52805781732f3cfbd1fa7f211cc02747bbd972dc9ecea60035b9c2f809ae"
	I0210 13:19:10.939954  679972 cri.go:89] found id: ""
	I0210 13:19:10.939963  679972 logs.go:282] 1 containers: [4ead52805781732f3cfbd1fa7f211cc02747bbd972dc9ecea60035b9c2f809ae]
	I0210 13:19:10.940014  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:10.943949  679972 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:19:10.944017  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:19:10.984895  679972 cri.go:89] found id: "489730fca41c45aafdf96e8fad2af2dd4df11a2818e7d0cf4b02956a5c613aa0"
	I0210 13:19:10.984921  679972 cri.go:89] found id: ""
	I0210 13:19:10.984929  679972 logs.go:282] 1 containers: [489730fca41c45aafdf96e8fad2af2dd4df11a2818e7d0cf4b02956a5c613aa0]
	I0210 13:19:10.984989  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:10.988973  679972 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:19:10.989033  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:19:11.027866  679972 cri.go:89] found id: "c8aa1561a3ec9eeae9ad70bcf145afbf075ed6989d1e52c4b19164eecdc28e79"
	I0210 13:19:11.027893  679972 cri.go:89] found id: ""
	I0210 13:19:11.027903  679972 logs.go:282] 1 containers: [c8aa1561a3ec9eeae9ad70bcf145afbf075ed6989d1e52c4b19164eecdc28e79]
	I0210 13:19:11.027969  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:11.032597  679972 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:19:11.032668  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:19:11.074810  679972 cri.go:89] found id: ""
	I0210 13:19:11.074844  679972 logs.go:282] 0 containers: []
	W0210 13:19:11.074856  679972 logs.go:284] No container was found matching "kindnet"
	I0210 13:19:11.074864  679972 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0210 13:19:11.074932  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0210 13:19:11.109980  679972 cri.go:89] found id: "27fbbf4567ed637ba5fcc220e7a30e6ae4112b3c8576695f7f9d288b3c461e2b"
	I0210 13:19:11.110009  679972 cri.go:89] found id: ""
	I0210 13:19:11.110019  679972 logs.go:282] 1 containers: [27fbbf4567ed637ba5fcc220e7a30e6ae4112b3c8576695f7f9d288b3c461e2b]
	I0210 13:19:11.110080  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:11.114176  679972 logs.go:123] Gathering logs for coredns [598f33d31a9580b27d51606559814f76af6d19f0ce8c85565694b0042307e34d] ...
	I0210 13:19:11.114200  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 598f33d31a9580b27d51606559814f76af6d19f0ce8c85565694b0042307e34d"
	I0210 13:19:11.151349  679972 logs.go:123] Gathering logs for kube-proxy [489730fca41c45aafdf96e8fad2af2dd4df11a2818e7d0cf4b02956a5c613aa0] ...
	I0210 13:19:11.151393  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 489730fca41c45aafdf96e8fad2af2dd4df11a2818e7d0cf4b02956a5c613aa0"
	I0210 13:19:11.187878  679972 logs.go:123] Gathering logs for kube-controller-manager [c8aa1561a3ec9eeae9ad70bcf145afbf075ed6989d1e52c4b19164eecdc28e79] ...
	I0210 13:19:11.187920  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8aa1561a3ec9eeae9ad70bcf145afbf075ed6989d1e52c4b19164eecdc28e79"
	I0210 13:19:11.228501  679972 logs.go:123] Gathering logs for storage-provisioner [27fbbf4567ed637ba5fcc220e7a30e6ae4112b3c8576695f7f9d288b3c461e2b] ...
	I0210 13:19:11.228547  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27fbbf4567ed637ba5fcc220e7a30e6ae4112b3c8576695f7f9d288b3c461e2b"
	I0210 13:19:11.266472  679972 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:19:11.266511  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:19:11.596498  679972 logs.go:123] Gathering logs for kubelet ...
	I0210 13:19:11.596540  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:19:11.690951  687731 main.go:141] libmachine: (embed-certs-396582) DBG | domain embed-certs-396582 has defined MAC address 52:54:00:df:11:43 in network mk-embed-certs-396582
	I0210 13:19:11.691508  687731 main.go:141] libmachine: (embed-certs-396582) found domain IP: 192.168.61.97
	I0210 13:19:11.691532  687731 main.go:141] libmachine: (embed-certs-396582) reserving static IP address...
	I0210 13:19:11.691561  687731 main.go:141] libmachine: (embed-certs-396582) DBG | domain embed-certs-396582 has current primary IP address 192.168.61.97 and MAC address 52:54:00:df:11:43 in network mk-embed-certs-396582
	I0210 13:19:11.692069  687731 main.go:141] libmachine: (embed-certs-396582) DBG | found host DHCP lease matching {name: "embed-certs-396582", mac: "52:54:00:df:11:43", ip: "192.168.61.97"} in network mk-embed-certs-396582: {Iface:virbr4 ExpiryTime:2025-02-10 14:19:04 +0000 UTC Type:0 Mac:52:54:00:df:11:43 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:embed-certs-396582 Clientid:01:52:54:00:df:11:43}
	I0210 13:19:11.692107  687731 main.go:141] libmachine: (embed-certs-396582) reserved static IP address 192.168.61.97 for domain embed-certs-396582
	I0210 13:19:11.692132  687731 main.go:141] libmachine: (embed-certs-396582) DBG | skip adding static IP to network mk-embed-certs-396582 - found existing host DHCP lease matching {name: "embed-certs-396582", mac: "52:54:00:df:11:43", ip: "192.168.61.97"}
	I0210 13:19:11.692150  687731 main.go:141] libmachine: (embed-certs-396582) DBG | Getting to WaitForSSH function...
	I0210 13:19:11.692196  687731 main.go:141] libmachine: (embed-certs-396582) waiting for SSH...
	I0210 13:19:11.694579  687731 main.go:141] libmachine: (embed-certs-396582) DBG | domain embed-certs-396582 has defined MAC address 52:54:00:df:11:43 in network mk-embed-certs-396582
	I0210 13:19:11.694993  687731 main.go:141] libmachine: (embed-certs-396582) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:11:43", ip: ""} in network mk-embed-certs-396582: {Iface:virbr4 ExpiryTime:2025-02-10 14:19:04 +0000 UTC Type:0 Mac:52:54:00:df:11:43 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:embed-certs-396582 Clientid:01:52:54:00:df:11:43}
	I0210 13:19:11.695036  687731 main.go:141] libmachine: (embed-certs-396582) DBG | domain embed-certs-396582 has defined IP address 192.168.61.97 and MAC address 52:54:00:df:11:43 in network mk-embed-certs-396582
	I0210 13:19:11.695188  687731 main.go:141] libmachine: (embed-certs-396582) DBG | Using SSH client type: external
	I0210 13:19:11.695218  687731 main.go:141] libmachine: (embed-certs-396582) DBG | Using SSH private key: /home/jenkins/minikube-integration/20383-625153/.minikube/machines/embed-certs-396582/id_rsa (-rw-------)
	I0210 13:19:11.695261  687731 main.go:141] libmachine: (embed-certs-396582) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.97 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20383-625153/.minikube/machines/embed-certs-396582/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0210 13:19:11.695274  687731 main.go:141] libmachine: (embed-certs-396582) DBG | About to run SSH command:
	I0210 13:19:11.695289  687731 main.go:141] libmachine: (embed-certs-396582) DBG | exit 0
	I0210 13:19:11.830124  687731 main.go:141] libmachine: (embed-certs-396582) DBG | SSH cmd err, output: <nil>: 
	I0210 13:19:11.830522  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetConfigRaw
	I0210 13:19:11.831373  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetIP
	I0210 13:19:11.834328  687731 main.go:141] libmachine: (embed-certs-396582) DBG | domain embed-certs-396582 has defined MAC address 52:54:00:df:11:43 in network mk-embed-certs-396582
	I0210 13:19:11.834761  687731 main.go:141] libmachine: (embed-certs-396582) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:11:43", ip: ""} in network mk-embed-certs-396582: {Iface:virbr4 ExpiryTime:2025-02-10 14:19:04 +0000 UTC Type:0 Mac:52:54:00:df:11:43 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:embed-certs-396582 Clientid:01:52:54:00:df:11:43}
	I0210 13:19:11.834820  687731 main.go:141] libmachine: (embed-certs-396582) DBG | domain embed-certs-396582 has defined IP address 192.168.61.97 and MAC address 52:54:00:df:11:43 in network mk-embed-certs-396582
	I0210 13:19:11.835088  687731 profile.go:143] Saving config to /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/embed-certs-396582/config.json ...
	I0210 13:19:11.835338  687731 machine.go:93] provisionDockerMachine start ...
	I0210 13:19:11.835365  687731 main.go:141] libmachine: (embed-certs-396582) Calling .DriverName
	I0210 13:19:11.835648  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetSSHHostname
	I0210 13:19:11.838533  687731 main.go:141] libmachine: (embed-certs-396582) DBG | domain embed-certs-396582 has defined MAC address 52:54:00:df:11:43 in network mk-embed-certs-396582
	I0210 13:19:11.838909  687731 main.go:141] libmachine: (embed-certs-396582) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:11:43", ip: ""} in network mk-embed-certs-396582: {Iface:virbr4 ExpiryTime:2025-02-10 14:19:04 +0000 UTC Type:0 Mac:52:54:00:df:11:43 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:embed-certs-396582 Clientid:01:52:54:00:df:11:43}
	I0210 13:19:11.838929  687731 main.go:141] libmachine: (embed-certs-396582) DBG | domain embed-certs-396582 has defined IP address 192.168.61.97 and MAC address 52:54:00:df:11:43 in network mk-embed-certs-396582
	I0210 13:19:11.839123  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetSSHPort
	I0210 13:19:11.839309  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetSSHKeyPath
	I0210 13:19:11.839494  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetSSHKeyPath
	I0210 13:19:11.839661  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetSSHUsername
	I0210 13:19:11.839838  687731 main.go:141] libmachine: Using SSH client type: native
	I0210 13:19:11.840237  687731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.61.97 22 <nil> <nil>}
	I0210 13:19:11.840259  687731 main.go:141] libmachine: About to run SSH command:
	hostname
	I0210 13:19:11.946500  687731 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0210 13:19:11.946546  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetMachineName
	I0210 13:19:11.946815  687731 buildroot.go:166] provisioning hostname "embed-certs-396582"
	I0210 13:19:11.946853  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetMachineName
	I0210 13:19:11.947047  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetSSHHostname
	I0210 13:19:11.950217  687731 main.go:141] libmachine: (embed-certs-396582) DBG | domain embed-certs-396582 has defined MAC address 52:54:00:df:11:43 in network mk-embed-certs-396582
	I0210 13:19:11.950661  687731 main.go:141] libmachine: (embed-certs-396582) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:11:43", ip: ""} in network mk-embed-certs-396582: {Iface:virbr4 ExpiryTime:2025-02-10 14:19:04 +0000 UTC Type:0 Mac:52:54:00:df:11:43 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:embed-certs-396582 Clientid:01:52:54:00:df:11:43}
	I0210 13:19:11.950693  687731 main.go:141] libmachine: (embed-certs-396582) DBG | domain embed-certs-396582 has defined IP address 192.168.61.97 and MAC address 52:54:00:df:11:43 in network mk-embed-certs-396582
	I0210 13:19:11.950885  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetSSHPort
	I0210 13:19:11.951073  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetSSHKeyPath
	I0210 13:19:11.951251  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetSSHKeyPath
	I0210 13:19:11.951400  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetSSHUsername
	I0210 13:19:11.951598  687731 main.go:141] libmachine: Using SSH client type: native
	I0210 13:19:11.951835  687731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.61.97 22 <nil> <nil>}
	I0210 13:19:11.951860  687731 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-396582 && echo "embed-certs-396582" | sudo tee /etc/hostname
	I0210 13:19:12.074922  687731 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-396582
	
	I0210 13:19:12.074960  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetSSHHostname
	I0210 13:19:12.078199  687731 main.go:141] libmachine: (embed-certs-396582) DBG | domain embed-certs-396582 has defined MAC address 52:54:00:df:11:43 in network mk-embed-certs-396582
	I0210 13:19:12.078645  687731 main.go:141] libmachine: (embed-certs-396582) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:11:43", ip: ""} in network mk-embed-certs-396582: {Iface:virbr4 ExpiryTime:2025-02-10 14:19:04 +0000 UTC Type:0 Mac:52:54:00:df:11:43 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:embed-certs-396582 Clientid:01:52:54:00:df:11:43}
	I0210 13:19:12.078699  687731 main.go:141] libmachine: (embed-certs-396582) DBG | domain embed-certs-396582 has defined IP address 192.168.61.97 and MAC address 52:54:00:df:11:43 in network mk-embed-certs-396582
	I0210 13:19:12.078852  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetSSHPort
	I0210 13:19:12.079083  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetSSHKeyPath
	I0210 13:19:12.079287  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetSSHKeyPath
	I0210 13:19:12.079475  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetSSHUsername
	I0210 13:19:12.079694  687731 main.go:141] libmachine: Using SSH client type: native
	I0210 13:19:12.079889  687731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.61.97 22 <nil> <nil>}
	I0210 13:19:12.079905  687731 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-396582' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-396582/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-396582' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0210 13:19:12.189586  687731 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0210 13:19:12.189618  687731 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20383-625153/.minikube CaCertPath:/home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20383-625153/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20383-625153/.minikube}
	I0210 13:19:12.189655  687731 buildroot.go:174] setting up certificates
	I0210 13:19:12.189667  687731 provision.go:84] configureAuth start
	I0210 13:19:12.189677  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetMachineName
	I0210 13:19:12.190000  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetIP
	I0210 13:19:12.193092  687731 main.go:141] libmachine: (embed-certs-396582) DBG | domain embed-certs-396582 has defined MAC address 52:54:00:df:11:43 in network mk-embed-certs-396582
	I0210 13:19:12.193593  687731 main.go:141] libmachine: (embed-certs-396582) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:11:43", ip: ""} in network mk-embed-certs-396582: {Iface:virbr4 ExpiryTime:2025-02-10 14:19:04 +0000 UTC Type:0 Mac:52:54:00:df:11:43 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:embed-certs-396582 Clientid:01:52:54:00:df:11:43}
	I0210 13:19:12.193626  687731 main.go:141] libmachine: (embed-certs-396582) DBG | domain embed-certs-396582 has defined IP address 192.168.61.97 and MAC address 52:54:00:df:11:43 in network mk-embed-certs-396582
	I0210 13:19:12.193803  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetSSHHostname
	I0210 13:19:12.196311  687731 main.go:141] libmachine: (embed-certs-396582) DBG | domain embed-certs-396582 has defined MAC address 52:54:00:df:11:43 in network mk-embed-certs-396582
	I0210 13:19:12.196714  687731 main.go:141] libmachine: (embed-certs-396582) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:11:43", ip: ""} in network mk-embed-certs-396582: {Iface:virbr4 ExpiryTime:2025-02-10 14:19:04 +0000 UTC Type:0 Mac:52:54:00:df:11:43 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:embed-certs-396582 Clientid:01:52:54:00:df:11:43}
	I0210 13:19:12.196761  687731 main.go:141] libmachine: (embed-certs-396582) DBG | domain embed-certs-396582 has defined IP address 192.168.61.97 and MAC address 52:54:00:df:11:43 in network mk-embed-certs-396582
	I0210 13:19:12.196928  687731 provision.go:143] copyHostCerts
	I0210 13:19:12.196998  687731 exec_runner.go:144] found /home/jenkins/minikube-integration/20383-625153/.minikube/ca.pem, removing ...
	I0210 13:19:12.197013  687731 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20383-625153/.minikube/ca.pem
	I0210 13:19:12.197084  687731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20383-625153/.minikube/ca.pem (1082 bytes)
	I0210 13:19:12.197301  687731 exec_runner.go:144] found /home/jenkins/minikube-integration/20383-625153/.minikube/cert.pem, removing ...
	I0210 13:19:12.197316  687731 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20383-625153/.minikube/cert.pem
	I0210 13:19:12.197365  687731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20383-625153/.minikube/cert.pem (1123 bytes)
	I0210 13:19:12.197449  687731 exec_runner.go:144] found /home/jenkins/minikube-integration/20383-625153/.minikube/key.pem, removing ...
	I0210 13:19:12.197459  687731 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20383-625153/.minikube/key.pem
	I0210 13:19:12.197495  687731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20383-625153/.minikube/key.pem (1675 bytes)
	I0210 13:19:12.197563  687731 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20383-625153/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca-key.pem org=jenkins.embed-certs-396582 san=[127.0.0.1 192.168.61.97 embed-certs-396582 localhost minikube]
	I0210 13:19:12.468031  687731 provision.go:177] copyRemoteCerts
	I0210 13:19:12.468112  687731 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0210 13:19:12.468147  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetSSHHostname
	I0210 13:19:12.471325  687731 main.go:141] libmachine: (embed-certs-396582) DBG | domain embed-certs-396582 has defined MAC address 52:54:00:df:11:43 in network mk-embed-certs-396582
	I0210 13:19:12.471787  687731 main.go:141] libmachine: (embed-certs-396582) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:11:43", ip: ""} in network mk-embed-certs-396582: {Iface:virbr4 ExpiryTime:2025-02-10 14:19:04 +0000 UTC Type:0 Mac:52:54:00:df:11:43 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:embed-certs-396582 Clientid:01:52:54:00:df:11:43}
	I0210 13:19:12.471819  687731 main.go:141] libmachine: (embed-certs-396582) DBG | domain embed-certs-396582 has defined IP address 192.168.61.97 and MAC address 52:54:00:df:11:43 in network mk-embed-certs-396582
	I0210 13:19:12.472036  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetSSHPort
	I0210 13:19:12.472289  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetSSHKeyPath
	I0210 13:19:12.472495  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetSSHUsername
	I0210 13:19:12.472666  687731 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/embed-certs-396582/id_rsa Username:docker}
	I0210 13:19:12.562178  687731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0210 13:19:12.587280  687731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0210 13:19:12.611521  687731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0210 13:19:12.637875  687731 provision.go:87] duration metric: took 448.19041ms to configureAuth
	I0210 13:19:12.637913  687731 buildroot.go:189] setting minikube options for container-runtime
	I0210 13:19:12.638191  687731 config.go:182] Loaded profile config "embed-certs-396582": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 13:19:12.638289  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetSSHHostname
	I0210 13:19:12.641152  687731 main.go:141] libmachine: (embed-certs-396582) DBG | domain embed-certs-396582 has defined MAC address 52:54:00:df:11:43 in network mk-embed-certs-396582
	I0210 13:19:12.641563  687731 main.go:141] libmachine: (embed-certs-396582) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:11:43", ip: ""} in network mk-embed-certs-396582: {Iface:virbr4 ExpiryTime:2025-02-10 14:19:04 +0000 UTC Type:0 Mac:52:54:00:df:11:43 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:embed-certs-396582 Clientid:01:52:54:00:df:11:43}
	I0210 13:19:12.641596  687731 main.go:141] libmachine: (embed-certs-396582) DBG | domain embed-certs-396582 has defined IP address 192.168.61.97 and MAC address 52:54:00:df:11:43 in network mk-embed-certs-396582
	I0210 13:19:12.641763  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetSSHPort
	I0210 13:19:12.641921  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetSSHKeyPath
	I0210 13:19:12.642042  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetSSHKeyPath
	I0210 13:19:12.642225  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetSSHUsername
	I0210 13:19:12.642405  687731 main.go:141] libmachine: Using SSH client type: native
	I0210 13:19:12.642587  687731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.61.97 22 <nil> <nil>}
	I0210 13:19:12.642601  687731 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0210 13:19:12.891065  687731 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0210 13:19:12.891104  687731 machine.go:96] duration metric: took 1.055746944s to provisionDockerMachine
	I0210 13:19:12.891118  687731 start.go:293] postStartSetup for "embed-certs-396582" (driver="kvm2")
	I0210 13:19:12.891133  687731 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0210 13:19:12.891160  687731 main.go:141] libmachine: (embed-certs-396582) Calling .DriverName
	I0210 13:19:12.891463  687731 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0210 13:19:12.891496  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetSSHHostname
	I0210 13:19:12.894270  687731 main.go:141] libmachine: (embed-certs-396582) DBG | domain embed-certs-396582 has defined MAC address 52:54:00:df:11:43 in network mk-embed-certs-396582
	I0210 13:19:12.894668  687731 main.go:141] libmachine: (embed-certs-396582) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:11:43", ip: ""} in network mk-embed-certs-396582: {Iface:virbr4 ExpiryTime:2025-02-10 14:19:04 +0000 UTC Type:0 Mac:52:54:00:df:11:43 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:embed-certs-396582 Clientid:01:52:54:00:df:11:43}
	I0210 13:19:12.894699  687731 main.go:141] libmachine: (embed-certs-396582) DBG | domain embed-certs-396582 has defined IP address 192.168.61.97 and MAC address 52:54:00:df:11:43 in network mk-embed-certs-396582
	I0210 13:19:12.894924  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetSSHPort
	I0210 13:19:12.895105  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetSSHKeyPath
	I0210 13:19:12.895253  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetSSHUsername
	I0210 13:19:12.895367  687731 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/embed-certs-396582/id_rsa Username:docker}
	I0210 13:19:12.982291  687731 ssh_runner.go:195] Run: cat /etc/os-release
	I0210 13:19:12.987354  687731 info.go:137] Remote host: Buildroot 2023.02.9
	I0210 13:19:12.987383  687731 filesync.go:126] Scanning /home/jenkins/minikube-integration/20383-625153/.minikube/addons for local assets ...
	I0210 13:19:12.987456  687731 filesync.go:126] Scanning /home/jenkins/minikube-integration/20383-625153/.minikube/files for local assets ...
	I0210 13:19:12.987547  687731 filesync.go:149] local asset: /home/jenkins/minikube-integration/20383-625153/.minikube/files/etc/ssl/certs/6323522.pem -> 6323522.pem in /etc/ssl/certs
	I0210 13:19:12.987643  687731 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0210 13:19:13.000365  687731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/files/etc/ssl/certs/6323522.pem --> /etc/ssl/certs/6323522.pem (1708 bytes)
	I0210 13:19:13.026739  687731 start.go:296] duration metric: took 135.597503ms for postStartSetup
	I0210 13:19:13.026794  687731 fix.go:56] duration metric: took 20.064609611s for fixHost
	I0210 13:19:13.026857  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetSSHHostname
	I0210 13:19:13.029632  687731 main.go:141] libmachine: (embed-certs-396582) DBG | domain embed-certs-396582 has defined MAC address 52:54:00:df:11:43 in network mk-embed-certs-396582
	I0210 13:19:13.029986  687731 main.go:141] libmachine: (embed-certs-396582) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:11:43", ip: ""} in network mk-embed-certs-396582: {Iface:virbr4 ExpiryTime:2025-02-10 14:19:04 +0000 UTC Type:0 Mac:52:54:00:df:11:43 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:embed-certs-396582 Clientid:01:52:54:00:df:11:43}
	I0210 13:19:13.030017  687731 main.go:141] libmachine: (embed-certs-396582) DBG | domain embed-certs-396582 has defined IP address 192.168.61.97 and MAC address 52:54:00:df:11:43 in network mk-embed-certs-396582
	I0210 13:19:13.030198  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetSSHPort
	I0210 13:19:13.030383  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetSSHKeyPath
	I0210 13:19:13.030550  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetSSHKeyPath
	I0210 13:19:13.030679  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetSSHUsername
	I0210 13:19:13.030821  687731 main.go:141] libmachine: Using SSH client type: native
	I0210 13:19:13.031036  687731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.61.97 22 <nil> <nil>}
	I0210 13:19:13.031050  687731 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0210 13:19:13.141589  687731 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739193553.116888015
	
	I0210 13:19:13.141637  687731 fix.go:216] guest clock: 1739193553.116888015
	I0210 13:19:13.141656  687731 fix.go:229] Guest: 2025-02-10 13:19:13.116888015 +0000 UTC Remote: 2025-02-10 13:19:13.02680041 +0000 UTC m=+20.242886405 (delta=90.087605ms)
	I0210 13:19:13.141688  687731 fix.go:200] guest clock delta is within tolerance: 90.087605ms
	I0210 13:19:13.141700  687731 start.go:83] releasing machines lock for "embed-certs-396582", held for 20.179531s
	I0210 13:19:13.141729  687731 main.go:141] libmachine: (embed-certs-396582) Calling .DriverName
	I0210 13:19:13.142005  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetIP
	I0210 13:19:13.144700  687731 main.go:141] libmachine: (embed-certs-396582) DBG | domain embed-certs-396582 has defined MAC address 52:54:00:df:11:43 in network mk-embed-certs-396582
	I0210 13:19:13.145064  687731 main.go:141] libmachine: (embed-certs-396582) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:11:43", ip: ""} in network mk-embed-certs-396582: {Iface:virbr4 ExpiryTime:2025-02-10 14:19:04 +0000 UTC Type:0 Mac:52:54:00:df:11:43 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:embed-certs-396582 Clientid:01:52:54:00:df:11:43}
	I0210 13:19:13.145093  687731 main.go:141] libmachine: (embed-certs-396582) DBG | domain embed-certs-396582 has defined IP address 192.168.61.97 and MAC address 52:54:00:df:11:43 in network mk-embed-certs-396582
	I0210 13:19:13.145271  687731 main.go:141] libmachine: (embed-certs-396582) Calling .DriverName
	I0210 13:19:13.145693  687731 main.go:141] libmachine: (embed-certs-396582) Calling .DriverName
	I0210 13:19:13.145893  687731 main.go:141] libmachine: (embed-certs-396582) Calling .DriverName
	I0210 13:19:13.145993  687731 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0210 13:19:13.146063  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetSSHHostname
	I0210 13:19:13.146109  687731 ssh_runner.go:195] Run: cat /version.json
	I0210 13:19:13.146136  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetSSHHostname
	I0210 13:19:13.148898  687731 main.go:141] libmachine: (embed-certs-396582) DBG | domain embed-certs-396582 has defined MAC address 52:54:00:df:11:43 in network mk-embed-certs-396582
	I0210 13:19:13.149183  687731 main.go:141] libmachine: (embed-certs-396582) DBG | domain embed-certs-396582 has defined MAC address 52:54:00:df:11:43 in network mk-embed-certs-396582
	I0210 13:19:13.149344  687731 main.go:141] libmachine: (embed-certs-396582) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:11:43", ip: ""} in network mk-embed-certs-396582: {Iface:virbr4 ExpiryTime:2025-02-10 14:19:04 +0000 UTC Type:0 Mac:52:54:00:df:11:43 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:embed-certs-396582 Clientid:01:52:54:00:df:11:43}
	I0210 13:19:13.149374  687731 main.go:141] libmachine: (embed-certs-396582) DBG | domain embed-certs-396582 has defined IP address 192.168.61.97 and MAC address 52:54:00:df:11:43 in network mk-embed-certs-396582
	I0210 13:19:13.149647  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetSSHPort
	I0210 13:19:13.149646  687731 main.go:141] libmachine: (embed-certs-396582) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:11:43", ip: ""} in network mk-embed-certs-396582: {Iface:virbr4 ExpiryTime:2025-02-10 14:19:04 +0000 UTC Type:0 Mac:52:54:00:df:11:43 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:embed-certs-396582 Clientid:01:52:54:00:df:11:43}
	I0210 13:19:13.149714  687731 main.go:141] libmachine: (embed-certs-396582) DBG | domain embed-certs-396582 has defined IP address 192.168.61.97 and MAC address 52:54:00:df:11:43 in network mk-embed-certs-396582
	I0210 13:19:13.149875  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetSSHPort
	I0210 13:19:13.149919  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetSSHKeyPath
	I0210 13:19:13.150062  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetSSHUsername
	I0210 13:19:13.150100  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetSSHKeyPath
	I0210 13:19:13.150209  687731 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/embed-certs-396582/id_rsa Username:docker}
	I0210 13:19:13.150253  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetSSHUsername
	I0210 13:19:13.150375  687731 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/embed-certs-396582/id_rsa Username:docker}
	I0210 13:19:13.247006  687731 ssh_runner.go:195] Run: systemctl --version
	I0210 13:19:13.253959  687731 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0210 13:19:13.405731  687731 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0210 13:19:13.413148  687731 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0210 13:19:13.413235  687731 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0210 13:19:13.429607  687731 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0210 13:19:13.429635  687731 start.go:495] detecting cgroup driver to use...
	I0210 13:19:13.429714  687731 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0210 13:19:13.448635  687731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0210 13:19:13.466021  687731 docker.go:217] disabling cri-docker service (if available) ...
	I0210 13:19:13.466090  687731 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0210 13:19:13.478917  687731 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0210 13:19:13.491658  687731 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0210 13:19:13.603855  687731 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0210 13:19:13.759511  687731 docker.go:233] disabling docker service ...
	I0210 13:19:13.759603  687731 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0210 13:19:13.773407  687731 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0210 13:19:13.785573  687731 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0210 13:19:13.939396  687731 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0210 13:19:14.068092  687731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0210 13:19:14.095161  687731 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0210 13:19:14.113577  687731 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0210 13:19:14.113666  687731 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:19:14.124114  687731 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0210 13:19:14.124210  687731 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:19:14.134779  687731 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:19:14.144846  687731 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:19:14.154685  687731 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0210 13:19:14.166477  687731 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:19:14.176593  687731 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:19:14.194961  687731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:19:14.205326  687731 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0210 13:19:14.215172  687731 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0210 13:19:14.215262  687731 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0210 13:19:14.228776  687731 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0210 13:19:14.240798  687731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 13:19:14.362545  687731 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0210 13:19:14.503314  687731 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0210 13:19:14.503416  687731 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0210 13:19:14.507852  687731 start.go:563] Will wait 60s for crictl version
	I0210 13:19:14.507915  687731 ssh_runner.go:195] Run: which crictl
	I0210 13:19:14.511484  687731 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0210 13:19:14.554763  687731 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0210 13:19:14.554901  687731 ssh_runner.go:195] Run: crio --version
	I0210 13:19:14.586974  687731 ssh_runner.go:195] Run: crio --version
	I0210 13:19:14.675686  687731 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0210 13:19:12.588769  687246 pod_ready.go:103] pod "coredns-668d6bf9bc-4h69k" in "kube-system" namespace has status "Ready":"False"
	I0210 13:19:15.090140  687246 pod_ready.go:103] pod "coredns-668d6bf9bc-4h69k" in "kube-system" namespace has status "Ready":"False"
	I0210 13:19:11.719293  679972 logs.go:123] Gathering logs for dmesg ...
	I0210 13:19:11.719335  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:19:11.737474  679972 logs.go:123] Gathering logs for etcd [20160adca5607e90c84c8383218ad704de5a5f989ebb40d7aa180da7c1ac9ff2] ...
	I0210 13:19:11.737514  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20160adca5607e90c84c8383218ad704de5a5f989ebb40d7aa180da7c1ac9ff2"
	I0210 13:19:11.779227  679972 logs.go:123] Gathering logs for container status ...
	I0210 13:19:11.779268  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:19:11.823702  679972 logs.go:123] Gathering logs for kube-scheduler [4ead52805781732f3cfbd1fa7f211cc02747bbd972dc9ecea60035b9c2f809ae] ...
	I0210 13:19:11.823748  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ead52805781732f3cfbd1fa7f211cc02747bbd972dc9ecea60035b9c2f809ae"
	I0210 13:19:11.874435  679972 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:19:11.874473  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:19:11.959576  679972 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:19:11.959598  679972 logs.go:123] Gathering logs for kube-apiserver [cc0fad0e28c192a15bcbd2ea0cbd62014f415a10df309d883d47e53249e42b99] ...
	I0210 13:19:11.959615  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc0fad0e28c192a15bcbd2ea0cbd62014f415a10df309d883d47e53249e42b99"
	I0210 13:19:12.004065  679972 logs.go:123] Gathering logs for coredns [de4935b1d2c7413b50a3a6bd9cb6b5243f8918b6f3662d005b53ee2d6b5d398f] ...
	I0210 13:19:12.004106  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de4935b1d2c7413b50a3a6bd9cb6b5243f8918b6f3662d005b53ee2d6b5d398f"
	I0210 13:19:14.541209  679972 api_server.go:253] Checking apiserver healthz at https://192.168.50.25:8443/healthz ...
	I0210 13:19:14.542010  679972 api_server.go:269] stopped: https://192.168.50.25:8443/healthz: Get "https://192.168.50.25:8443/healthz": dial tcp 192.168.50.25:8443: connect: connection refused
	I0210 13:19:14.542087  679972 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:19:14.542167  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:19:14.579261  679972 cri.go:89] found id: "cc0fad0e28c192a15bcbd2ea0cbd62014f415a10df309d883d47e53249e42b99"
	I0210 13:19:14.579297  679972 cri.go:89] found id: ""
	I0210 13:19:14.579308  679972 logs.go:282] 1 containers: [cc0fad0e28c192a15bcbd2ea0cbd62014f415a10df309d883d47e53249e42b99]
	I0210 13:19:14.579380  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:14.583521  679972 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:19:14.583597  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:19:14.618378  679972 cri.go:89] found id: "20160adca5607e90c84c8383218ad704de5a5f989ebb40d7aa180da7c1ac9ff2"
	I0210 13:19:14.618408  679972 cri.go:89] found id: ""
	I0210 13:19:14.618419  679972 logs.go:282] 1 containers: [20160adca5607e90c84c8383218ad704de5a5f989ebb40d7aa180da7c1ac9ff2]
	I0210 13:19:14.618486  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:14.622645  679972 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:19:14.622730  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:19:14.657299  679972 cri.go:89] found id: "598f33d31a9580b27d51606559814f76af6d19f0ce8c85565694b0042307e34d"
	I0210 13:19:14.657346  679972 cri.go:89] found id: "de4935b1d2c7413b50a3a6bd9cb6b5243f8918b6f3662d005b53ee2d6b5d398f"
	I0210 13:19:14.657353  679972 cri.go:89] found id: ""
	I0210 13:19:14.657364  679972 logs.go:282] 2 containers: [598f33d31a9580b27d51606559814f76af6d19f0ce8c85565694b0042307e34d de4935b1d2c7413b50a3a6bd9cb6b5243f8918b6f3662d005b53ee2d6b5d398f]
	I0210 13:19:14.657426  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:14.661408  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:14.665011  679972 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:19:14.665071  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:19:14.703932  679972 cri.go:89] found id: "4ead52805781732f3cfbd1fa7f211cc02747bbd972dc9ecea60035b9c2f809ae"
	I0210 13:19:14.703955  679972 cri.go:89] found id: ""
	I0210 13:19:14.703972  679972 logs.go:282] 1 containers: [4ead52805781732f3cfbd1fa7f211cc02747bbd972dc9ecea60035b9c2f809ae]
	I0210 13:19:14.704028  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:14.708091  679972 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:19:14.708190  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:19:14.755256  679972 cri.go:89] found id: "489730fca41c45aafdf96e8fad2af2dd4df11a2818e7d0cf4b02956a5c613aa0"
	I0210 13:19:14.755286  679972 cri.go:89] found id: ""
	I0210 13:19:14.755296  679972 logs.go:282] 1 containers: [489730fca41c45aafdf96e8fad2af2dd4df11a2818e7d0cf4b02956a5c613aa0]
	I0210 13:19:14.755357  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:14.759716  679972 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:19:14.759794  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:19:14.793854  679972 cri.go:89] found id: "c8aa1561a3ec9eeae9ad70bcf145afbf075ed6989d1e52c4b19164eecdc28e79"
	I0210 13:19:14.793885  679972 cri.go:89] found id: ""
	I0210 13:19:14.793894  679972 logs.go:282] 1 containers: [c8aa1561a3ec9eeae9ad70bcf145afbf075ed6989d1e52c4b19164eecdc28e79]
	I0210 13:19:14.793952  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:14.798196  679972 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:19:14.798262  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:19:14.840613  679972 cri.go:89] found id: ""
	I0210 13:19:14.840648  679972 logs.go:282] 0 containers: []
	W0210 13:19:14.840661  679972 logs.go:284] No container was found matching "kindnet"
	I0210 13:19:14.840670  679972 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0210 13:19:14.840747  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0210 13:19:14.879580  679972 cri.go:89] found id: "27fbbf4567ed637ba5fcc220e7a30e6ae4112b3c8576695f7f9d288b3c461e2b"
	I0210 13:19:14.879614  679972 cri.go:89] found id: ""
	I0210 13:19:14.879625  679972 logs.go:282] 1 containers: [27fbbf4567ed637ba5fcc220e7a30e6ae4112b3c8576695f7f9d288b3c461e2b]
	I0210 13:19:14.879699  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:14.884145  679972 logs.go:123] Gathering logs for kube-proxy [489730fca41c45aafdf96e8fad2af2dd4df11a2818e7d0cf4b02956a5c613aa0] ...
	I0210 13:19:14.884181  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 489730fca41c45aafdf96e8fad2af2dd4df11a2818e7d0cf4b02956a5c613aa0"
	I0210 13:19:14.930863  679972 logs.go:123] Gathering logs for storage-provisioner [27fbbf4567ed637ba5fcc220e7a30e6ae4112b3c8576695f7f9d288b3c461e2b] ...
	I0210 13:19:14.930901  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27fbbf4567ed637ba5fcc220e7a30e6ae4112b3c8576695f7f9d288b3c461e2b"
	I0210 13:19:14.977327  679972 logs.go:123] Gathering logs for kubelet ...
	I0210 13:19:14.977367  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:19:15.109081  679972 logs.go:123] Gathering logs for etcd [20160adca5607e90c84c8383218ad704de5a5f989ebb40d7aa180da7c1ac9ff2] ...
	I0210 13:19:15.109136  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20160adca5607e90c84c8383218ad704de5a5f989ebb40d7aa180da7c1ac9ff2"
	I0210 13:19:15.163754  679972 logs.go:123] Gathering logs for kube-scheduler [4ead52805781732f3cfbd1fa7f211cc02747bbd972dc9ecea60035b9c2f809ae] ...
	I0210 13:19:15.163844  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ead52805781732f3cfbd1fa7f211cc02747bbd972dc9ecea60035b9c2f809ae"
	I0210 13:19:15.223388  679972 logs.go:123] Gathering logs for coredns [598f33d31a9580b27d51606559814f76af6d19f0ce8c85565694b0042307e34d] ...
	I0210 13:19:15.223426  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 598f33d31a9580b27d51606559814f76af6d19f0ce8c85565694b0042307e34d"
	I0210 13:19:15.280365  679972 logs.go:123] Gathering logs for coredns [de4935b1d2c7413b50a3a6bd9cb6b5243f8918b6f3662d005b53ee2d6b5d398f] ...
	I0210 13:19:15.280409  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de4935b1d2c7413b50a3a6bd9cb6b5243f8918b6f3662d005b53ee2d6b5d398f"
	I0210 13:19:15.332259  679972 logs.go:123] Gathering logs for kube-controller-manager [c8aa1561a3ec9eeae9ad70bcf145afbf075ed6989d1e52c4b19164eecdc28e79] ...
	I0210 13:19:15.332304  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8aa1561a3ec9eeae9ad70bcf145afbf075ed6989d1e52c4b19164eecdc28e79"
	I0210 13:19:15.387491  679972 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:19:15.387535  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:19:15.766723  679972 logs.go:123] Gathering logs for container status ...
	I0210 13:19:15.766761  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:19:15.821467  679972 logs.go:123] Gathering logs for dmesg ...
	I0210 13:19:15.821501  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:19:15.839062  679972 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:19:15.839111  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:19:15.925520  679972 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:19:15.925555  679972 logs.go:123] Gathering logs for kube-apiserver [cc0fad0e28c192a15bcbd2ea0cbd62014f415a10df309d883d47e53249e42b99] ...
	I0210 13:19:15.925573  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc0fad0e28c192a15bcbd2ea0cbd62014f415a10df309d883d47e53249e42b99"
	I0210 13:19:14.677426  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetIP
	I0210 13:19:14.680689  687731 main.go:141] libmachine: (embed-certs-396582) DBG | domain embed-certs-396582 has defined MAC address 52:54:00:df:11:43 in network mk-embed-certs-396582
	I0210 13:19:14.681097  687731 main.go:141] libmachine: (embed-certs-396582) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:11:43", ip: ""} in network mk-embed-certs-396582: {Iface:virbr4 ExpiryTime:2025-02-10 14:19:04 +0000 UTC Type:0 Mac:52:54:00:df:11:43 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:embed-certs-396582 Clientid:01:52:54:00:df:11:43}
	I0210 13:19:14.681165  687731 main.go:141] libmachine: (embed-certs-396582) DBG | domain embed-certs-396582 has defined IP address 192.168.61.97 and MAC address 52:54:00:df:11:43 in network mk-embed-certs-396582
	I0210 13:19:14.681405  687731 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0210 13:19:14.685908  687731 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 13:19:14.698302  687731 kubeadm.go:883] updating cluster {Name:embed-certs-396582 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-396582 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.97 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s
Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0210 13:19:14.698511  687731 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0210 13:19:14.698598  687731 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 13:19:14.740370  687731 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0210 13:19:14.740449  687731 ssh_runner.go:195] Run: which lz4
	I0210 13:19:14.744494  687731 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0210 13:19:14.749834  687731 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0210 13:19:14.749877  687731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0210 13:19:16.025821  687731 crio.go:462] duration metric: took 1.281355592s to copy over tarball
	I0210 13:19:16.025933  687731 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0210 13:19:17.113610  687246 pod_ready.go:93] pod "coredns-668d6bf9bc-4h69k" in "kube-system" namespace has status "Ready":"True"
	I0210 13:19:17.113639  687246 pod_ready.go:82] duration metric: took 6.530731395s for pod "coredns-668d6bf9bc-4h69k" in "kube-system" namespace to be "Ready" ...
	I0210 13:19:17.113653  687246 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-112306" in "kube-system" namespace to be "Ready" ...
	I0210 13:19:17.119340  687246 pod_ready.go:93] pod "etcd-no-preload-112306" in "kube-system" namespace has status "Ready":"True"
	I0210 13:19:17.119365  687246 pod_ready.go:82] duration metric: took 5.704333ms for pod "etcd-no-preload-112306" in "kube-system" namespace to be "Ready" ...
	I0210 13:19:17.119378  687246 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-112306" in "kube-system" namespace to be "Ready" ...
	I0210 13:19:17.129768  687246 pod_ready.go:93] pod "kube-apiserver-no-preload-112306" in "kube-system" namespace has status "Ready":"True"
	I0210 13:19:17.129796  687246 pod_ready.go:82] duration metric: took 10.404603ms for pod "kube-apiserver-no-preload-112306" in "kube-system" namespace to be "Ready" ...
	I0210 13:19:17.129809  687246 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-112306" in "kube-system" namespace to be "Ready" ...
	I0210 13:19:17.134478  687246 pod_ready.go:93] pod "kube-controller-manager-no-preload-112306" in "kube-system" namespace has status "Ready":"True"
	I0210 13:19:17.134500  687246 pod_ready.go:82] duration metric: took 4.683148ms for pod "kube-controller-manager-no-preload-112306" in "kube-system" namespace to be "Ready" ...
	I0210 13:19:17.134512  687246 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-l2wxd" in "kube-system" namespace to be "Ready" ...
	I0210 13:19:17.139180  687246 pod_ready.go:93] pod "kube-proxy-l2wxd" in "kube-system" namespace has status "Ready":"True"
	I0210 13:19:17.139200  687246 pod_ready.go:82] duration metric: took 4.680327ms for pod "kube-proxy-l2wxd" in "kube-system" namespace to be "Ready" ...
	I0210 13:19:17.139210  687246 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-112306" in "kube-system" namespace to be "Ready" ...
	I0210 13:19:17.487822  687246 pod_ready.go:93] pod "kube-scheduler-no-preload-112306" in "kube-system" namespace has status "Ready":"True"
	I0210 13:19:17.487859  687246 pod_ready.go:82] duration metric: took 348.639916ms for pod "kube-scheduler-no-preload-112306" in "kube-system" namespace to be "Ready" ...
	I0210 13:19:17.487875  687246 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-f79f97bbb-r9f86" in "kube-system" namespace to be "Ready" ...
	I0210 13:19:19.492874  687246 pod_ready.go:103] pod "metrics-server-f79f97bbb-r9f86" in "kube-system" namespace has status "Ready":"False"
	I0210 13:19:18.477967  679972 api_server.go:253] Checking apiserver healthz at https://192.168.50.25:8443/healthz ...
	I0210 13:19:18.478692  679972 api_server.go:269] stopped: https://192.168.50.25:8443/healthz: Get "https://192.168.50.25:8443/healthz": dial tcp 192.168.50.25:8443: connect: connection refused
	I0210 13:19:18.478767  679972 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:19:18.478836  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:19:18.524576  679972 cri.go:89] found id: "cc0fad0e28c192a15bcbd2ea0cbd62014f415a10df309d883d47e53249e42b99"
	I0210 13:19:18.524614  679972 cri.go:89] found id: ""
	I0210 13:19:18.524626  679972 logs.go:282] 1 containers: [cc0fad0e28c192a15bcbd2ea0cbd62014f415a10df309d883d47e53249e42b99]
	I0210 13:19:18.524690  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:18.528738  679972 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:19:18.528804  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:19:18.574612  679972 cri.go:89] found id: "20160adca5607e90c84c8383218ad704de5a5f989ebb40d7aa180da7c1ac9ff2"
	I0210 13:19:18.574641  679972 cri.go:89] found id: ""
	I0210 13:19:18.574651  679972 logs.go:282] 1 containers: [20160adca5607e90c84c8383218ad704de5a5f989ebb40d7aa180da7c1ac9ff2]
	I0210 13:19:18.574718  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:18.580739  679972 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:19:18.580830  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:19:18.628238  679972 cri.go:89] found id: "598f33d31a9580b27d51606559814f76af6d19f0ce8c85565694b0042307e34d"
	I0210 13:19:18.628267  679972 cri.go:89] found id: "de4935b1d2c7413b50a3a6bd9cb6b5243f8918b6f3662d005b53ee2d6b5d398f"
	I0210 13:19:18.628278  679972 cri.go:89] found id: ""
	I0210 13:19:18.628285  679972 logs.go:282] 2 containers: [598f33d31a9580b27d51606559814f76af6d19f0ce8c85565694b0042307e34d de4935b1d2c7413b50a3a6bd9cb6b5243f8918b6f3662d005b53ee2d6b5d398f]
	I0210 13:19:18.628348  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:18.633288  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:18.637845  679972 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:19:18.637918  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:19:18.680489  679972 cri.go:89] found id: "4ead52805781732f3cfbd1fa7f211cc02747bbd972dc9ecea60035b9c2f809ae"
	I0210 13:19:18.680526  679972 cri.go:89] found id: ""
	I0210 13:19:18.680537  679972 logs.go:282] 1 containers: [4ead52805781732f3cfbd1fa7f211cc02747bbd972dc9ecea60035b9c2f809ae]
	I0210 13:19:18.680600  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:18.688684  679972 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:19:18.688786  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:19:18.749951  679972 cri.go:89] found id: "489730fca41c45aafdf96e8fad2af2dd4df11a2818e7d0cf4b02956a5c613aa0"
	I0210 13:19:18.749977  679972 cri.go:89] found id: ""
	I0210 13:19:18.749996  679972 logs.go:282] 1 containers: [489730fca41c45aafdf96e8fad2af2dd4df11a2818e7d0cf4b02956a5c613aa0]
	I0210 13:19:18.750054  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:18.755869  679972 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:19:18.755953  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:19:18.805669  679972 cri.go:89] found id: "c8aa1561a3ec9eeae9ad70bcf145afbf075ed6989d1e52c4b19164eecdc28e79"
	I0210 13:19:18.805702  679972 cri.go:89] found id: ""
	I0210 13:19:18.805713  679972 logs.go:282] 1 containers: [c8aa1561a3ec9eeae9ad70bcf145afbf075ed6989d1e52c4b19164eecdc28e79]
	I0210 13:19:18.805849  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:18.811634  679972 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:19:18.811750  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:19:18.859619  679972 cri.go:89] found id: ""
	I0210 13:19:18.859671  679972 logs.go:282] 0 containers: []
	W0210 13:19:18.859682  679972 logs.go:284] No container was found matching "kindnet"
	I0210 13:19:18.859691  679972 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0210 13:19:18.859760  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0210 13:19:18.909651  679972 cri.go:89] found id: "27fbbf4567ed637ba5fcc220e7a30e6ae4112b3c8576695f7f9d288b3c461e2b"
	I0210 13:19:18.909680  679972 cri.go:89] found id: ""
	I0210 13:19:18.909691  679972 logs.go:282] 1 containers: [27fbbf4567ed637ba5fcc220e7a30e6ae4112b3c8576695f7f9d288b3c461e2b]
	I0210 13:19:18.909752  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:18.915144  679972 logs.go:123] Gathering logs for dmesg ...
	I0210 13:19:18.915181  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:19:18.933933  679972 logs.go:123] Gathering logs for etcd [20160adca5607e90c84c8383218ad704de5a5f989ebb40d7aa180da7c1ac9ff2] ...
	I0210 13:19:18.933972  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20160adca5607e90c84c8383218ad704de5a5f989ebb40d7aa180da7c1ac9ff2"
	I0210 13:19:18.992355  679972 logs.go:123] Gathering logs for kube-controller-manager [c8aa1561a3ec9eeae9ad70bcf145afbf075ed6989d1e52c4b19164eecdc28e79] ...
	I0210 13:19:18.992392  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8aa1561a3ec9eeae9ad70bcf145afbf075ed6989d1e52c4b19164eecdc28e79"
	I0210 13:19:19.043192  679972 logs.go:123] Gathering logs for storage-provisioner [27fbbf4567ed637ba5fcc220e7a30e6ae4112b3c8576695f7f9d288b3c461e2b] ...
	I0210 13:19:19.043229  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27fbbf4567ed637ba5fcc220e7a30e6ae4112b3c8576695f7f9d288b3c461e2b"
	I0210 13:19:19.089042  679972 logs.go:123] Gathering logs for kube-scheduler [4ead52805781732f3cfbd1fa7f211cc02747bbd972dc9ecea60035b9c2f809ae] ...
	I0210 13:19:19.089080  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ead52805781732f3cfbd1fa7f211cc02747bbd972dc9ecea60035b9c2f809ae"
	I0210 13:19:19.143557  679972 logs.go:123] Gathering logs for kube-proxy [489730fca41c45aafdf96e8fad2af2dd4df11a2818e7d0cf4b02956a5c613aa0] ...
	I0210 13:19:19.143595  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 489730fca41c45aafdf96e8fad2af2dd4df11a2818e7d0cf4b02956a5c613aa0"
	I0210 13:19:19.184375  679972 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:19:19.184403  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:19:19.557933  679972 logs.go:123] Gathering logs for kubelet ...
	I0210 13:19:19.557969  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:19:19.681327  679972 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:19:19.681459  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:19:19.761905  679972 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:19:19.761942  679972 logs.go:123] Gathering logs for kube-apiserver [cc0fad0e28c192a15bcbd2ea0cbd62014f415a10df309d883d47e53249e42b99] ...
	I0210 13:19:19.761958  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc0fad0e28c192a15bcbd2ea0cbd62014f415a10df309d883d47e53249e42b99"
	I0210 13:19:19.801420  679972 logs.go:123] Gathering logs for coredns [598f33d31a9580b27d51606559814f76af6d19f0ce8c85565694b0042307e34d] ...
	I0210 13:19:19.801459  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 598f33d31a9580b27d51606559814f76af6d19f0ce8c85565694b0042307e34d"
	I0210 13:19:19.839042  679972 logs.go:123] Gathering logs for coredns [de4935b1d2c7413b50a3a6bd9cb6b5243f8918b6f3662d005b53ee2d6b5d398f] ...
	I0210 13:19:19.839092  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de4935b1d2c7413b50a3a6bd9cb6b5243f8918b6f3662d005b53ee2d6b5d398f"
	I0210 13:19:19.883077  679972 logs.go:123] Gathering logs for container status ...
	I0210 13:19:19.883125  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:19:18.291216  687731 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.265242505s)
	I0210 13:19:18.291251  687731 crio.go:469] duration metric: took 2.265382509s to extract the tarball
	I0210 13:19:18.291261  687731 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0210 13:19:18.329565  687731 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 13:19:18.378907  687731 crio.go:514] all images are preloaded for cri-o runtime.
	I0210 13:19:18.378944  687731 cache_images.go:84] Images are preloaded, skipping loading
	I0210 13:19:18.378956  687731 kubeadm.go:934] updating node { 192.168.61.97 8443 v1.32.1 crio true true} ...
	I0210 13:19:18.379117  687731 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-396582 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.97
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:embed-certs-396582 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0210 13:19:18.379215  687731 ssh_runner.go:195] Run: crio config
	I0210 13:19:18.427252  687731 cni.go:84] Creating CNI manager for ""
	I0210 13:19:18.427278  687731 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 13:19:18.427288  687731 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0210 13:19:18.427316  687731 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.97 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-396582 NodeName:embed-certs-396582 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.97"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.97 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0210 13:19:18.427469  687731 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.97
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-396582"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.97"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.97"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0210 13:19:18.427559  687731 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0210 13:19:18.440285  687731 binaries.go:44] Found k8s binaries, skipping transfer
	I0210 13:19:18.440348  687731 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0210 13:19:18.451798  687731 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0210 13:19:18.470131  687731 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0210 13:19:18.488671  687731 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I0210 13:19:18.510350  687731 ssh_runner.go:195] Run: grep 192.168.61.97	control-plane.minikube.internal$ /etc/hosts
	I0210 13:19:18.514677  687731 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.97	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 13:19:18.532321  687731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 13:19:18.682133  687731 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 13:19:18.701315  687731 certs.go:68] Setting up /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/embed-certs-396582 for IP: 192.168.61.97
	I0210 13:19:18.701344  687731 certs.go:194] generating shared ca certs ...
	I0210 13:19:18.701368  687731 certs.go:226] acquiring lock for ca certs: {Name:mkf2f72f82a6bd7e3c16bb224cd26b80c3c89e0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:19:18.701572  687731 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20383-625153/.minikube/ca.key
	I0210 13:19:18.701635  687731 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20383-625153/.minikube/proxy-client-ca.key
	I0210 13:19:18.701655  687731 certs.go:256] generating profile certs ...
	I0210 13:19:18.701780  687731 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/embed-certs-396582/client.key
	I0210 13:19:18.701868  687731 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/embed-certs-396582/apiserver.key.ae8ab186
	I0210 13:19:18.701944  687731 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/embed-certs-396582/proxy-client.key
	I0210 13:19:18.702104  687731 certs.go:484] found cert: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/632352.pem (1338 bytes)
	W0210 13:19:18.702148  687731 certs.go:480] ignoring /home/jenkins/minikube-integration/20383-625153/.minikube/certs/632352_empty.pem, impossibly tiny 0 bytes
	I0210 13:19:18.702162  687731 certs.go:484] found cert: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca-key.pem (1675 bytes)
	I0210 13:19:18.702200  687731 certs.go:484] found cert: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca.pem (1082 bytes)
	I0210 13:19:18.702239  687731 certs.go:484] found cert: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/cert.pem (1123 bytes)
	I0210 13:19:18.702273  687731 certs.go:484] found cert: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/key.pem (1675 bytes)
	I0210 13:19:18.702339  687731 certs.go:484] found cert: /home/jenkins/minikube-integration/20383-625153/.minikube/files/etc/ssl/certs/6323522.pem (1708 bytes)
	I0210 13:19:18.703124  687731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0210 13:19:18.748745  687731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0210 13:19:18.793314  687731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0210 13:19:18.830224  687731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0210 13:19:18.868885  687731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/embed-certs-396582/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0210 13:19:18.904669  687731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/embed-certs-396582/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0210 13:19:18.942614  687731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/embed-certs-396582/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0210 13:19:18.971534  687731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/embed-certs-396582/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0210 13:19:19.000430  687731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/files/etc/ssl/certs/6323522.pem --> /usr/share/ca-certificates/6323522.pem (1708 bytes)
	I0210 13:19:19.028657  687731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0210 13:19:19.057554  687731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/certs/632352.pem --> /usr/share/ca-certificates/632352.pem (1338 bytes)
	I0210 13:19:19.089158  687731 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0210 13:19:19.107077  687731 ssh_runner.go:195] Run: openssl version
	I0210 13:19:19.113462  687731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6323522.pem && ln -fs /usr/share/ca-certificates/6323522.pem /etc/ssl/certs/6323522.pem"
	I0210 13:19:19.125294  687731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6323522.pem
	I0210 13:19:19.130499  687731 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 10 12:13 /usr/share/ca-certificates/6323522.pem
	I0210 13:19:19.130573  687731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6323522.pem
	I0210 13:19:19.136831  687731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6323522.pem /etc/ssl/certs/3ec20f2e.0"
	I0210 13:19:19.147579  687731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0210 13:19:19.159532  687731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0210 13:19:19.164521  687731 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 10 12:05 /usr/share/ca-certificates/minikubeCA.pem
	I0210 13:19:19.164599  687731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0210 13:19:19.170919  687731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0210 13:19:19.183342  687731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/632352.pem && ln -fs /usr/share/ca-certificates/632352.pem /etc/ssl/certs/632352.pem"
	I0210 13:19:19.198041  687731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/632352.pem
	I0210 13:19:19.203892  687731 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 10 12:13 /usr/share/ca-certificates/632352.pem
	I0210 13:19:19.203974  687731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/632352.pem
	I0210 13:19:19.210979  687731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/632352.pem /etc/ssl/certs/51391683.0"
	I0210 13:19:19.224794  687731 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0210 13:19:19.229858  687731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0210 13:19:19.236551  687731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0210 13:19:19.242097  687731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0210 13:19:19.247503  687731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0210 13:19:19.252972  687731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0210 13:19:19.260593  687731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0210 13:19:19.266401  687731 kubeadm.go:392] StartCluster: {Name:embed-certs-396582 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-396582 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.97 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 13:19:19.266509  687731 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0210 13:19:19.266564  687731 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0210 13:19:19.302037  687731 cri.go:89] found id: ""
	I0210 13:19:19.302119  687731 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0210 13:19:19.312320  687731 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0210 13:19:19.312347  687731 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0210 13:19:19.312421  687731 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0210 13:19:19.324791  687731 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0210 13:19:19.325761  687731 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-396582" does not appear in /home/jenkins/minikube-integration/20383-625153/kubeconfig
	I0210 13:19:19.326262  687731 kubeconfig.go:62] /home/jenkins/minikube-integration/20383-625153/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-396582" cluster setting kubeconfig missing "embed-certs-396582" context setting]
	I0210 13:19:19.326862  687731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20383-625153/kubeconfig: {Name:mke7ef1ff4ff1259856291979fdd0337df3a08b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:19:19.328328  687731 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0210 13:19:19.341316  687731 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.97
	I0210 13:19:19.341357  687731 kubeadm.go:1160] stopping kube-system containers ...
	I0210 13:19:19.341372  687731 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0210 13:19:19.341433  687731 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0210 13:19:19.384713  687731 cri.go:89] found id: ""
	I0210 13:19:19.384806  687731 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0210 13:19:19.406445  687731 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 13:19:19.416347  687731 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 13:19:19.416374  687731 kubeadm.go:157] found existing configuration files:
	
	I0210 13:19:19.416440  687731 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0210 13:19:19.426549  687731 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 13:19:19.426639  687731 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 13:19:19.436938  687731 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0210 13:19:19.445782  687731 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 13:19:19.445846  687731 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 13:19:19.454519  687731 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0210 13:19:19.463128  687731 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 13:19:19.463190  687731 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 13:19:19.474803  687731 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0210 13:19:19.484214  687731 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 13:19:19.484281  687731 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0210 13:19:19.494088  687731 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0210 13:19:19.513270  687731 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 13:19:19.644146  687731 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 13:19:20.496973  687731 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0210 13:19:20.735399  687731 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 13:19:20.801744  687731 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0210 13:19:20.902094  687731 api_server.go:52] waiting for apiserver process to appear ...
	I0210 13:19:20.902192  687731 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:19:21.403256  687731 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:19:21.902924  687731 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:19:22.402328  687731 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:19:22.432372  687246 pod_ready.go:103] pod "metrics-server-f79f97bbb-r9f86" in "kube-system" namespace has status "Ready":"False"
	I0210 13:19:24.493365  687246 pod_ready.go:103] pod "metrics-server-f79f97bbb-r9f86" in "kube-system" namespace has status "Ready":"False"
	I0210 13:19:22.424785  679972 api_server.go:253] Checking apiserver healthz at https://192.168.50.25:8443/healthz ...
	I0210 13:19:22.425633  679972 api_server.go:269] stopped: https://192.168.50.25:8443/healthz: Get "https://192.168.50.25:8443/healthz": dial tcp 192.168.50.25:8443: connect: connection refused
	I0210 13:19:22.425711  679972 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:19:22.425781  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:19:22.476406  679972 cri.go:89] found id: "cc0fad0e28c192a15bcbd2ea0cbd62014f415a10df309d883d47e53249e42b99"
	I0210 13:19:22.476441  679972 cri.go:89] found id: ""
	I0210 13:19:22.476452  679972 logs.go:282] 1 containers: [cc0fad0e28c192a15bcbd2ea0cbd62014f415a10df309d883d47e53249e42b99]
	I0210 13:19:22.476523  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:22.482628  679972 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:19:22.482728  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:19:22.526433  679972 cri.go:89] found id: "20160adca5607e90c84c8383218ad704de5a5f989ebb40d7aa180da7c1ac9ff2"
	I0210 13:19:22.526464  679972 cri.go:89] found id: ""
	I0210 13:19:22.526475  679972 logs.go:282] 1 containers: [20160adca5607e90c84c8383218ad704de5a5f989ebb40d7aa180da7c1ac9ff2]
	I0210 13:19:22.526541  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:22.532039  679972 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:19:22.532123  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:19:22.587261  679972 cri.go:89] found id: "598f33d31a9580b27d51606559814f76af6d19f0ce8c85565694b0042307e34d"
	I0210 13:19:22.587292  679972 cri.go:89] found id: "de4935b1d2c7413b50a3a6bd9cb6b5243f8918b6f3662d005b53ee2d6b5d398f"
	I0210 13:19:22.587298  679972 cri.go:89] found id: ""
	I0210 13:19:22.587308  679972 logs.go:282] 2 containers: [598f33d31a9580b27d51606559814f76af6d19f0ce8c85565694b0042307e34d de4935b1d2c7413b50a3a6bd9cb6b5243f8918b6f3662d005b53ee2d6b5d398f]
	I0210 13:19:22.587383  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:22.592569  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:22.597355  679972 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:19:22.597457  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:19:22.637484  679972 cri.go:89] found id: "4ead52805781732f3cfbd1fa7f211cc02747bbd972dc9ecea60035b9c2f809ae"
	I0210 13:19:22.637513  679972 cri.go:89] found id: ""
	I0210 13:19:22.637525  679972 logs.go:282] 1 containers: [4ead52805781732f3cfbd1fa7f211cc02747bbd972dc9ecea60035b9c2f809ae]
	I0210 13:19:22.637585  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:22.642005  679972 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:19:22.642121  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:19:22.690819  679972 cri.go:89] found id: "489730fca41c45aafdf96e8fad2af2dd4df11a2818e7d0cf4b02956a5c613aa0"
	I0210 13:19:22.690851  679972 cri.go:89] found id: ""
	I0210 13:19:22.690862  679972 logs.go:282] 1 containers: [489730fca41c45aafdf96e8fad2af2dd4df11a2818e7d0cf4b02956a5c613aa0]
	I0210 13:19:22.690930  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:22.696576  679972 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:19:22.696673  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:19:22.739394  679972 cri.go:89] found id: "c8aa1561a3ec9eeae9ad70bcf145afbf075ed6989d1e52c4b19164eecdc28e79"
	I0210 13:19:22.739428  679972 cri.go:89] found id: ""
	I0210 13:19:22.739440  679972 logs.go:282] 1 containers: [c8aa1561a3ec9eeae9ad70bcf145afbf075ed6989d1e52c4b19164eecdc28e79]
	I0210 13:19:22.739507  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:22.744836  679972 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:19:22.744963  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:19:22.793319  679972 cri.go:89] found id: ""
	I0210 13:19:22.793403  679972 logs.go:282] 0 containers: []
	W0210 13:19:22.793420  679972 logs.go:284] No container was found matching "kindnet"
	I0210 13:19:22.793430  679972 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0210 13:19:22.793500  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0210 13:19:22.839642  679972 cri.go:89] found id: "27fbbf4567ed637ba5fcc220e7a30e6ae4112b3c8576695f7f9d288b3c461e2b"
	I0210 13:19:22.839674  679972 cri.go:89] found id: ""
	I0210 13:19:22.839686  679972 logs.go:282] 1 containers: [27fbbf4567ed637ba5fcc220e7a30e6ae4112b3c8576695f7f9d288b3c461e2b]
	I0210 13:19:22.839752  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:22.844876  679972 logs.go:123] Gathering logs for kube-apiserver [cc0fad0e28c192a15bcbd2ea0cbd62014f415a10df309d883d47e53249e42b99] ...
	I0210 13:19:22.844901  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc0fad0e28c192a15bcbd2ea0cbd62014f415a10df309d883d47e53249e42b99"
	I0210 13:19:22.895787  679972 logs.go:123] Gathering logs for etcd [20160adca5607e90c84c8383218ad704de5a5f989ebb40d7aa180da7c1ac9ff2] ...
	I0210 13:19:22.895840  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20160adca5607e90c84c8383218ad704de5a5f989ebb40d7aa180da7c1ac9ff2"
	I0210 13:19:22.948862  679972 logs.go:123] Gathering logs for coredns [598f33d31a9580b27d51606559814f76af6d19f0ce8c85565694b0042307e34d] ...
	I0210 13:19:22.948904  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 598f33d31a9580b27d51606559814f76af6d19f0ce8c85565694b0042307e34d"
	I0210 13:19:22.993599  679972 logs.go:123] Gathering logs for kube-proxy [489730fca41c45aafdf96e8fad2af2dd4df11a2818e7d0cf4b02956a5c613aa0] ...
	I0210 13:19:22.993637  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 489730fca41c45aafdf96e8fad2af2dd4df11a2818e7d0cf4b02956a5c613aa0"
	I0210 13:19:23.035718  679972 logs.go:123] Gathering logs for container status ...
	I0210 13:19:23.035759  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:19:23.076332  679972 logs.go:123] Gathering logs for kubelet ...
	I0210 13:19:23.076380  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:19:23.202421  679972 logs.go:123] Gathering logs for dmesg ...
	I0210 13:19:23.202463  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:19:23.217208  679972 logs.go:123] Gathering logs for kube-scheduler [4ead52805781732f3cfbd1fa7f211cc02747bbd972dc9ecea60035b9c2f809ae] ...
	I0210 13:19:23.217253  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ead52805781732f3cfbd1fa7f211cc02747bbd972dc9ecea60035b9c2f809ae"
	I0210 13:19:23.273124  679972 logs.go:123] Gathering logs for kube-controller-manager [c8aa1561a3ec9eeae9ad70bcf145afbf075ed6989d1e52c4b19164eecdc28e79] ...
	I0210 13:19:23.273161  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8aa1561a3ec9eeae9ad70bcf145afbf075ed6989d1e52c4b19164eecdc28e79"
	I0210 13:19:23.310853  679972 logs.go:123] Gathering logs for storage-provisioner [27fbbf4567ed637ba5fcc220e7a30e6ae4112b3c8576695f7f9d288b3c461e2b] ...
	I0210 13:19:23.310902  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27fbbf4567ed637ba5fcc220e7a30e6ae4112b3c8576695f7f9d288b3c461e2b"
	I0210 13:19:23.353948  679972 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:19:23.353987  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:19:23.649696  679972 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:19:23.649744  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:19:23.741170  679972 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:19:23.741197  679972 logs.go:123] Gathering logs for coredns [de4935b1d2c7413b50a3a6bd9cb6b5243f8918b6f3662d005b53ee2d6b5d398f] ...
	I0210 13:19:23.741215  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de4935b1d2c7413b50a3a6bd9cb6b5243f8918b6f3662d005b53ee2d6b5d398f"
	I0210 13:19:26.288018  679972 api_server.go:253] Checking apiserver healthz at https://192.168.50.25:8443/healthz ...
	I0210 13:19:26.288710  679972 api_server.go:269] stopped: https://192.168.50.25:8443/healthz: Get "https://192.168.50.25:8443/healthz": dial tcp 192.168.50.25:8443: connect: connection refused
	I0210 13:19:26.288782  679972 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:19:26.288856  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:19:26.325742  679972 cri.go:89] found id: "cc0fad0e28c192a15bcbd2ea0cbd62014f415a10df309d883d47e53249e42b99"
	I0210 13:19:26.325772  679972 cri.go:89] found id: ""
	I0210 13:19:26.325785  679972 logs.go:282] 1 containers: [cc0fad0e28c192a15bcbd2ea0cbd62014f415a10df309d883d47e53249e42b99]
	I0210 13:19:26.325846  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:26.330921  679972 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:19:26.330996  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:19:26.371037  679972 cri.go:89] found id: "20160adca5607e90c84c8383218ad704de5a5f989ebb40d7aa180da7c1ac9ff2"
	I0210 13:19:26.371067  679972 cri.go:89] found id: ""
	I0210 13:19:26.371079  679972 logs.go:282] 1 containers: [20160adca5607e90c84c8383218ad704de5a5f989ebb40d7aa180da7c1ac9ff2]
	I0210 13:19:26.371153  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:26.375147  679972 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:19:26.375241  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:19:26.435003  679972 cri.go:89] found id: "598f33d31a9580b27d51606559814f76af6d19f0ce8c85565694b0042307e34d"
	I0210 13:19:26.435024  679972 cri.go:89] found id: "de4935b1d2c7413b50a3a6bd9cb6b5243f8918b6f3662d005b53ee2d6b5d398f"
	I0210 13:19:26.435030  679972 cri.go:89] found id: ""
	I0210 13:19:26.435040  679972 logs.go:282] 2 containers: [598f33d31a9580b27d51606559814f76af6d19f0ce8c85565694b0042307e34d de4935b1d2c7413b50a3a6bd9cb6b5243f8918b6f3662d005b53ee2d6b5d398f]
	I0210 13:19:26.435101  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:26.439524  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:26.443355  679972 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:19:26.443433  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:19:26.478136  679972 cri.go:89] found id: "4ead52805781732f3cfbd1fa7f211cc02747bbd972dc9ecea60035b9c2f809ae"
	I0210 13:19:26.478162  679972 cri.go:89] found id: ""
	I0210 13:19:26.478172  679972 logs.go:282] 1 containers: [4ead52805781732f3cfbd1fa7f211cc02747bbd972dc9ecea60035b9c2f809ae]
	I0210 13:19:26.478237  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:26.483153  679972 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:19:26.483242  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:19:26.528837  679972 cri.go:89] found id: "489730fca41c45aafdf96e8fad2af2dd4df11a2818e7d0cf4b02956a5c613aa0"
	I0210 13:19:26.528867  679972 cri.go:89] found id: ""
	I0210 13:19:26.528877  679972 logs.go:282] 1 containers: [489730fca41c45aafdf96e8fad2af2dd4df11a2818e7d0cf4b02956a5c613aa0]
	I0210 13:19:26.528937  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:26.533346  679972 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:19:26.533433  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:19:26.570120  679972 cri.go:89] found id: "c8aa1561a3ec9eeae9ad70bcf145afbf075ed6989d1e52c4b19164eecdc28e79"
	I0210 13:19:26.570152  679972 cri.go:89] found id: ""
	I0210 13:19:26.570170  679972 logs.go:282] 1 containers: [c8aa1561a3ec9eeae9ad70bcf145afbf075ed6989d1e52c4b19164eecdc28e79]
	I0210 13:19:26.570239  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:26.574139  679972 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:19:26.574218  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:19:26.613814  679972 cri.go:89] found id: ""
	I0210 13:19:26.613850  679972 logs.go:282] 0 containers: []
	W0210 13:19:26.613862  679972 logs.go:284] No container was found matching "kindnet"
	I0210 13:19:26.613870  679972 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0210 13:19:26.613935  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0210 13:19:22.903011  687731 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:19:22.924792  687731 api_server.go:72] duration metric: took 2.022687423s to wait for apiserver process to appear ...
	I0210 13:19:22.924832  687731 api_server.go:88] waiting for apiserver healthz status ...
	I0210 13:19:22.924862  687731 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8443/healthz ...
	I0210 13:19:25.419606  687731 api_server.go:279] https://192.168.61.97:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0210 13:19:25.419640  687731 api_server.go:103] status: https://192.168.61.97:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0210 13:19:25.419655  687731 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8443/healthz ...
	I0210 13:19:25.453143  687731 api_server.go:279] https://192.168.61.97:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0210 13:19:25.453182  687731 api_server.go:103] status: https://192.168.61.97:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0210 13:19:25.453199  687731 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8443/healthz ...
	I0210 13:19:25.506495  687731 api_server.go:279] https://192.168.61.97:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0210 13:19:25.506529  687731 api_server.go:103] status: https://192.168.61.97:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0210 13:19:25.925121  687731 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8443/healthz ...
	I0210 13:19:25.931060  687731 api_server.go:279] https://192.168.61.97:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0210 13:19:25.931089  687731 api_server.go:103] status: https://192.168.61.97:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0210 13:19:26.425802  687731 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8443/healthz ...
	I0210 13:19:26.434723  687731 api_server.go:279] https://192.168.61.97:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0210 13:19:26.434758  687731 api_server.go:103] status: https://192.168.61.97:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0210 13:19:26.925196  687731 api_server.go:253] Checking apiserver healthz at https://192.168.61.97:8443/healthz ...
	I0210 13:19:26.930100  687731 api_server.go:279] https://192.168.61.97:8443/healthz returned 200:
	ok
	I0210 13:19:26.937461  687731 api_server.go:141] control plane version: v1.32.1
	I0210 13:19:26.937497  687731 api_server.go:131] duration metric: took 4.012655654s to wait for apiserver health ...
	I0210 13:19:26.937510  687731 cni.go:84] Creating CNI manager for ""
	I0210 13:19:26.937520  687731 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 13:19:26.939640  687731 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0210 13:19:26.941138  687731 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0210 13:19:26.957476  687731 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0210 13:19:26.975068  687731 system_pods.go:43] waiting for kube-system pods to appear ...
	I0210 13:19:26.982623  687731 system_pods.go:59] 8 kube-system pods found
	I0210 13:19:26.982668  687731 system_pods.go:61] "coredns-668d6bf9bc-l7crf" [d6c85faa-7d99-4190-9bd0-5339f638b588] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0210 13:19:26.982678  687731 system_pods.go:61] "etcd-embed-certs-396582" [7d84f4ba-c827-4b10-a00f-87424c80198f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0210 13:19:26.982685  687731 system_pods.go:61] "kube-apiserver-embed-certs-396582" [fa8dcb5d-c6b4-4b63-af04-bdd22efafcd6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0210 13:19:26.982691  687731 system_pods.go:61] "kube-controller-manager-embed-certs-396582" [df593e3f-5ba7-4cbb-bc92-b2512090b238] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0210 13:19:26.982696  687731 system_pods.go:61] "kube-proxy-jsm65" [34e1144f-4b4c-4b5f-8011-93f1e1884a51] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0210 13:19:26.982701  687731 system_pods.go:61] "kube-scheduler-embed-certs-396582" [56544e65-0882-47b5-8b9e-484aec8c5849] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0210 13:19:26.982707  687731 system_pods.go:61] "metrics-server-f79f97bbb-97hlp" [d3f29eae-f663-4b26-baf2-3279f239fd1a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0210 13:19:26.982712  687731 system_pods.go:61] "storage-provisioner" [4e24b6ab-9a55-4790-90d6-06d0c6de782b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0210 13:19:26.982719  687731 system_pods.go:74] duration metric: took 7.616616ms to wait for pod list to return data ...
	I0210 13:19:26.982730  687731 node_conditions.go:102] verifying NodePressure condition ...
	I0210 13:19:26.985694  687731 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 13:19:26.985726  687731 node_conditions.go:123] node cpu capacity is 2
	I0210 13:19:26.985741  687731 node_conditions.go:105] duration metric: took 3.001174ms to run NodePressure ...
	I0210 13:19:26.985763  687731 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 13:19:27.298134  687731 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0210 13:19:27.301864  687731 kubeadm.go:739] kubelet initialised
	I0210 13:19:27.301884  687731 kubeadm.go:740] duration metric: took 3.718908ms waiting for restarted kubelet to initialise ...
	I0210 13:19:27.301893  687731 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0210 13:19:27.305229  687731 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-l7crf" in "kube-system" namespace to be "Ready" ...
	I0210 13:19:27.311232  687731 pod_ready.go:98] node "embed-certs-396582" hosting pod "coredns-668d6bf9bc-l7crf" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-396582" has status "Ready":"False"
	I0210 13:19:27.311258  687731 pod_ready.go:82] duration metric: took 6.002785ms for pod "coredns-668d6bf9bc-l7crf" in "kube-system" namespace to be "Ready" ...
	E0210 13:19:27.311270  687731 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-396582" hosting pod "coredns-668d6bf9bc-l7crf" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-396582" has status "Ready":"False"
	I0210 13:19:27.311283  687731 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-396582" in "kube-system" namespace to be "Ready" ...
	I0210 13:19:27.318280  687731 pod_ready.go:98] node "embed-certs-396582" hosting pod "etcd-embed-certs-396582" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-396582" has status "Ready":"False"
	I0210 13:19:27.318302  687731 pod_ready.go:82] duration metric: took 7.008283ms for pod "etcd-embed-certs-396582" in "kube-system" namespace to be "Ready" ...
	E0210 13:19:27.318314  687731 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-396582" hosting pod "etcd-embed-certs-396582" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-396582" has status "Ready":"False"
	I0210 13:19:27.318322  687731 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-396582" in "kube-system" namespace to be "Ready" ...
	I0210 13:19:27.321647  687731 pod_ready.go:98] node "embed-certs-396582" hosting pod "kube-apiserver-embed-certs-396582" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-396582" has status "Ready":"False"
	I0210 13:19:27.321672  687731 pod_ready.go:82] duration metric: took 3.339626ms for pod "kube-apiserver-embed-certs-396582" in "kube-system" namespace to be "Ready" ...
	E0210 13:19:27.321684  687731 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-396582" hosting pod "kube-apiserver-embed-certs-396582" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-396582" has status "Ready":"False"
	I0210 13:19:27.321694  687731 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-396582" in "kube-system" namespace to be "Ready" ...
	I0210 13:19:27.378327  687731 pod_ready.go:98] node "embed-certs-396582" hosting pod "kube-controller-manager-embed-certs-396582" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-396582" has status "Ready":"False"
	I0210 13:19:27.378368  687731 pod_ready.go:82] duration metric: took 56.652371ms for pod "kube-controller-manager-embed-certs-396582" in "kube-system" namespace to be "Ready" ...
	E0210 13:19:27.378382  687731 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-396582" hosting pod "kube-controller-manager-embed-certs-396582" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-396582" has status "Ready":"False"
	I0210 13:19:27.378391  687731 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-jsm65" in "kube-system" namespace to be "Ready" ...
	I0210 13:19:27.778798  687731 pod_ready.go:98] node "embed-certs-396582" hosting pod "kube-proxy-jsm65" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-396582" has status "Ready":"False"
	I0210 13:19:27.778829  687731 pod_ready.go:82] duration metric: took 400.427513ms for pod "kube-proxy-jsm65" in "kube-system" namespace to be "Ready" ...
	E0210 13:19:27.778844  687731 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-396582" hosting pod "kube-proxy-jsm65" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-396582" has status "Ready":"False"
	I0210 13:19:27.778853  687731 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-396582" in "kube-system" namespace to be "Ready" ...
	I0210 13:19:28.178090  687731 pod_ready.go:98] node "embed-certs-396582" hosting pod "kube-scheduler-embed-certs-396582" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-396582" has status "Ready":"False"
	I0210 13:19:28.178119  687731 pod_ready.go:82] duration metric: took 399.258918ms for pod "kube-scheduler-embed-certs-396582" in "kube-system" namespace to be "Ready" ...
	E0210 13:19:28.178129  687731 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-396582" hosting pod "kube-scheduler-embed-certs-396582" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-396582" has status "Ready":"False"
	I0210 13:19:28.178143  687731 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-97hlp" in "kube-system" namespace to be "Ready" ...
	I0210 13:19:28.578267  687731 pod_ready.go:98] node "embed-certs-396582" hosting pod "metrics-server-f79f97bbb-97hlp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-396582" has status "Ready":"False"
	I0210 13:19:28.578295  687731 pod_ready.go:82] duration metric: took 400.136987ms for pod "metrics-server-f79f97bbb-97hlp" in "kube-system" namespace to be "Ready" ...
	E0210 13:19:28.578305  687731 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-396582" hosting pod "metrics-server-f79f97bbb-97hlp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-396582" has status "Ready":"False"
	I0210 13:19:28.578313  687731 pod_ready.go:39] duration metric: took 1.27640961s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0210 13:19:28.578337  687731 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0210 13:19:28.589914  687731 ops.go:34] apiserver oom_adj: -16
	I0210 13:19:28.589945  687731 kubeadm.go:597] duration metric: took 9.277588873s to restartPrimaryControlPlane
	I0210 13:19:28.589957  687731 kubeadm.go:394] duration metric: took 9.323563108s to StartCluster
	I0210 13:19:28.589979  687731 settings.go:142] acquiring lock: {Name:mk4bd8331d641665e48ff1d1c4382f2e915609be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:19:28.590083  687731 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20383-625153/kubeconfig
	I0210 13:19:28.592120  687731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20383-625153/kubeconfig: {Name:mke7ef1ff4ff1259856291979fdd0337df3a08b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:19:28.592447  687731 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.97 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0210 13:19:28.592568  687731 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0210 13:19:28.592650  687731 config.go:182] Loaded profile config "embed-certs-396582": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 13:19:28.592667  687731 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-396582"
	I0210 13:19:28.592688  687731 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-396582"
	I0210 13:19:28.592689  687731 addons.go:69] Setting default-storageclass=true in profile "embed-certs-396582"
	W0210 13:19:28.592701  687731 addons.go:247] addon storage-provisioner should already be in state true
	I0210 13:19:28.592701  687731 addons.go:69] Setting metrics-server=true in profile "embed-certs-396582"
	I0210 13:19:28.592715  687731 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-396582"
	I0210 13:19:28.592727  687731 addons.go:69] Setting dashboard=true in profile "embed-certs-396582"
	I0210 13:19:28.592737  687731 host.go:66] Checking if "embed-certs-396582" exists ...
	I0210 13:19:28.592747  687731 addons.go:238] Setting addon dashboard=true in "embed-certs-396582"
	W0210 13:19:28.592760  687731 addons.go:247] addon dashboard should already be in state true
	I0210 13:19:28.592796  687731 host.go:66] Checking if "embed-certs-396582" exists ...
	I0210 13:19:28.593066  687731 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:19:28.593067  687731 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:19:28.593095  687731 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:19:28.593160  687731 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:19:28.592719  687731 addons.go:238] Setting addon metrics-server=true in "embed-certs-396582"
	I0210 13:19:28.593260  687731 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:19:28.593284  687731 main.go:141] libmachine: Launching plugin server for driver kvm2
	W0210 13:19:28.593283  687731 addons.go:247] addon metrics-server should already be in state true
	I0210 13:19:28.593393  687731 host.go:66] Checking if "embed-certs-396582" exists ...
	I0210 13:19:28.593850  687731 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:19:28.593916  687731 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:19:28.594304  687731 out.go:177] * Verifying Kubernetes components...
	I0210 13:19:28.595770  687731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 13:19:28.610185  687731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44419
	I0210 13:19:28.610749  687731 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:19:28.611428  687731 main.go:141] libmachine: Using API Version  1
	I0210 13:19:28.611450  687731 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:19:28.611958  687731 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:19:28.612581  687731 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:19:28.612621  687731 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:19:28.613699  687731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40823
	I0210 13:19:28.613737  687731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33659
	I0210 13:19:28.613887  687731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43225
	I0210 13:19:28.614118  687731 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:19:28.614199  687731 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:19:28.614356  687731 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:19:28.614542  687731 main.go:141] libmachine: Using API Version  1
	I0210 13:19:28.614563  687731 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:19:28.614849  687731 main.go:141] libmachine: Using API Version  1
	I0210 13:19:28.614876  687731 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:19:28.614921  687731 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:19:28.614970  687731 main.go:141] libmachine: Using API Version  1
	I0210 13:19:28.615001  687731 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:19:28.615101  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetState
	I0210 13:19:28.615241  687731 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:19:28.615330  687731 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:19:28.615769  687731 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:19:28.615812  687731 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:19:28.616368  687731 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:19:28.616401  687731 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:19:28.619185  687731 addons.go:238] Setting addon default-storageclass=true in "embed-certs-396582"
	W0210 13:19:28.619208  687731 addons.go:247] addon default-storageclass should already be in state true
	I0210 13:19:28.619242  687731 host.go:66] Checking if "embed-certs-396582" exists ...
	I0210 13:19:28.619610  687731 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:19:28.619643  687731 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:19:28.630427  687731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33661
	I0210 13:19:28.631985  687731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41163
	I0210 13:19:28.635803  687731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37709
	I0210 13:19:28.657730  687731 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:19:28.657743  687731 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:19:28.657817  687731 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:19:28.658277  687731 main.go:141] libmachine: Using API Version  1
	I0210 13:19:28.658281  687731 main.go:141] libmachine: Using API Version  1
	I0210 13:19:28.658306  687731 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:19:28.658321  687731 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:19:28.658407  687731 main.go:141] libmachine: Using API Version  1
	I0210 13:19:28.658437  687731 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:19:28.658706  687731 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:19:28.658794  687731 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:19:28.658836  687731 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:19:28.658922  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetState
	I0210 13:19:28.658972  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetState
	I0210 13:19:28.659051  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetState
	I0210 13:19:28.661299  687731 main.go:141] libmachine: (embed-certs-396582) Calling .DriverName
	I0210 13:19:28.661791  687731 main.go:141] libmachine: (embed-certs-396582) Calling .DriverName
	I0210 13:19:28.661847  687731 main.go:141] libmachine: (embed-certs-396582) Calling .DriverName
	I0210 13:19:28.663401  687731 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0210 13:19:28.663453  687731 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0210 13:19:28.663502  687731 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 13:19:28.664699  687731 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0210 13:19:28.664720  687731 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0210 13:19:28.664744  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetSSHHostname
	I0210 13:19:28.665515  687731 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0210 13:19:28.665613  687731 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0210 13:19:28.665630  687731 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0210 13:19:28.665647  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetSSHHostname
	I0210 13:19:28.666798  687731 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0210 13:19:28.666818  687731 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0210 13:19:28.666837  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetSSHHostname
	I0210 13:19:28.668489  687731 main.go:141] libmachine: (embed-certs-396582) DBG | domain embed-certs-396582 has defined MAC address 52:54:00:df:11:43 in network mk-embed-certs-396582
	I0210 13:19:28.669573  687731 main.go:141] libmachine: (embed-certs-396582) DBG | domain embed-certs-396582 has defined MAC address 52:54:00:df:11:43 in network mk-embed-certs-396582
	I0210 13:19:28.669626  687731 main.go:141] libmachine: (embed-certs-396582) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:11:43", ip: ""} in network mk-embed-certs-396582: {Iface:virbr4 ExpiryTime:2025-02-10 14:19:04 +0000 UTC Type:0 Mac:52:54:00:df:11:43 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:embed-certs-396582 Clientid:01:52:54:00:df:11:43}
	I0210 13:19:28.669689  687731 main.go:141] libmachine: (embed-certs-396582) DBG | domain embed-certs-396582 has defined IP address 192.168.61.97 and MAC address 52:54:00:df:11:43 in network mk-embed-certs-396582
	I0210 13:19:28.670109  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetSSHPort
	I0210 13:19:28.670246  687731 main.go:141] libmachine: (embed-certs-396582) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:11:43", ip: ""} in network mk-embed-certs-396582: {Iface:virbr4 ExpiryTime:2025-02-10 14:19:04 +0000 UTC Type:0 Mac:52:54:00:df:11:43 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:embed-certs-396582 Clientid:01:52:54:00:df:11:43}
	I0210 13:19:28.670279  687731 main.go:141] libmachine: (embed-certs-396582) DBG | domain embed-certs-396582 has defined IP address 192.168.61.97 and MAC address 52:54:00:df:11:43 in network mk-embed-certs-396582
	I0210 13:19:28.670311  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetSSHKeyPath
	I0210 13:19:28.670490  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetSSHUsername
	I0210 13:19:28.670563  687731 main.go:141] libmachine: (embed-certs-396582) DBG | domain embed-certs-396582 has defined MAC address 52:54:00:df:11:43 in network mk-embed-certs-396582
	I0210 13:19:28.670591  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetSSHPort
	I0210 13:19:28.670770  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetSSHKeyPath
	I0210 13:19:28.670793  687731 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/embed-certs-396582/id_rsa Username:docker}
	I0210 13:19:28.670909  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetSSHUsername
	I0210 13:19:28.670975  687731 main.go:141] libmachine: (embed-certs-396582) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:11:43", ip: ""} in network mk-embed-certs-396582: {Iface:virbr4 ExpiryTime:2025-02-10 14:19:04 +0000 UTC Type:0 Mac:52:54:00:df:11:43 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:embed-certs-396582 Clientid:01:52:54:00:df:11:43}
	I0210 13:19:28.670997  687731 main.go:141] libmachine: (embed-certs-396582) DBG | domain embed-certs-396582 has defined IP address 192.168.61.97 and MAC address 52:54:00:df:11:43 in network mk-embed-certs-396582
	I0210 13:19:28.671127  687731 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/embed-certs-396582/id_rsa Username:docker}
	I0210 13:19:28.671224  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetSSHPort
	I0210 13:19:28.671516  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetSSHKeyPath
	I0210 13:19:28.671659  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetSSHUsername
	I0210 13:19:28.671795  687731 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/embed-certs-396582/id_rsa Username:docker}
	I0210 13:19:28.676593  687731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37643
	I0210 13:19:28.677048  687731 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:19:28.677543  687731 main.go:141] libmachine: Using API Version  1
	I0210 13:19:28.677566  687731 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:19:28.677921  687731 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:19:28.678445  687731 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:19:28.678492  687731 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:19:28.693795  687731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44837
	I0210 13:19:28.694269  687731 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:19:28.694788  687731 main.go:141] libmachine: Using API Version  1
	I0210 13:19:28.694817  687731 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:19:28.695131  687731 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:19:28.695379  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetState
	I0210 13:19:28.697008  687731 main.go:141] libmachine: (embed-certs-396582) Calling .DriverName
	I0210 13:19:28.697267  687731 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0210 13:19:28.697286  687731 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0210 13:19:28.697308  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetSSHHostname
	I0210 13:19:28.700601  687731 main.go:141] libmachine: (embed-certs-396582) DBG | domain embed-certs-396582 has defined MAC address 52:54:00:df:11:43 in network mk-embed-certs-396582
	I0210 13:19:28.701010  687731 main.go:141] libmachine: (embed-certs-396582) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:11:43", ip: ""} in network mk-embed-certs-396582: {Iface:virbr4 ExpiryTime:2025-02-10 14:19:04 +0000 UTC Type:0 Mac:52:54:00:df:11:43 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:embed-certs-396582 Clientid:01:52:54:00:df:11:43}
	I0210 13:19:28.701049  687731 main.go:141] libmachine: (embed-certs-396582) DBG | domain embed-certs-396582 has defined IP address 192.168.61.97 and MAC address 52:54:00:df:11:43 in network mk-embed-certs-396582
	I0210 13:19:28.701198  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetSSHPort
	I0210 13:19:28.701382  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetSSHKeyPath
	I0210 13:19:28.701519  687731 main.go:141] libmachine: (embed-certs-396582) Calling .GetSSHUsername
	I0210 13:19:28.701666  687731 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/embed-certs-396582/id_rsa Username:docker}
	I0210 13:19:28.846602  687731 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 13:19:28.864699  687731 node_ready.go:35] waiting up to 6m0s for node "embed-certs-396582" to be "Ready" ...
	I0210 13:19:29.021248  687731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0210 13:19:29.022142  687731 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0210 13:19:29.022166  687731 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0210 13:19:29.043498  687731 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0210 13:19:29.043521  687731 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0210 13:19:29.051607  687731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0210 13:19:29.064438  687731 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0210 13:19:29.064472  687731 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0210 13:19:29.091262  687731 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0210 13:19:29.091322  687731 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0210 13:19:29.127129  687731 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0210 13:19:29.127163  687731 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0210 13:19:29.138499  687731 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0210 13:19:29.138529  687731 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0210 13:19:29.171773  687731 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0210 13:19:29.171801  687731 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0210 13:19:29.203142  687731 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0210 13:19:29.203175  687731 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0210 13:19:29.220420  687731 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0210 13:19:29.220454  687731 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0210 13:19:29.229781  687731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0210 13:19:29.272475  687731 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0210 13:19:29.272513  687731 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0210 13:19:29.381577  687731 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0210 13:19:29.381614  687731 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0210 13:19:29.471517  687731 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0210 13:19:29.471551  687731 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0210 13:19:29.544901  687731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0210 13:19:30.378921  687731 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.357633742s)
	I0210 13:19:30.378977  687731 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.327333905s)
	I0210 13:19:30.378991  687731 main.go:141] libmachine: Making call to close driver server
	I0210 13:19:30.379007  687731 main.go:141] libmachine: (embed-certs-396582) Calling .Close
	I0210 13:19:30.379025  687731 main.go:141] libmachine: Making call to close driver server
	I0210 13:19:30.379054  687731 main.go:141] libmachine: (embed-certs-396582) Calling .Close
	I0210 13:19:30.379394  687731 main.go:141] libmachine: Successfully made call to close driver server
	I0210 13:19:30.379432  687731 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 13:19:30.379455  687731 main.go:141] libmachine: Making call to close driver server
	I0210 13:19:30.379468  687731 main.go:141] libmachine: (embed-certs-396582) DBG | Closing plugin on server side
	I0210 13:19:30.379473  687731 main.go:141] libmachine: (embed-certs-396582) Calling .Close
	I0210 13:19:30.379598  687731 main.go:141] libmachine: (embed-certs-396582) DBG | Closing plugin on server side
	I0210 13:19:30.379774  687731 main.go:141] libmachine: Successfully made call to close driver server
	I0210 13:19:30.379894  687731 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 13:19:30.379810  687731 main.go:141] libmachine: (embed-certs-396582) DBG | Closing plugin on server side
	I0210 13:19:30.381741  687731 main.go:141] libmachine: Successfully made call to close driver server
	I0210 13:19:30.381758  687731 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 13:19:30.381768  687731 main.go:141] libmachine: Making call to close driver server
	I0210 13:19:30.381777  687731 main.go:141] libmachine: (embed-certs-396582) Calling .Close
	I0210 13:19:30.382017  687731 main.go:141] libmachine: Successfully made call to close driver server
	I0210 13:19:30.382036  687731 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 13:19:30.382072  687731 main.go:141] libmachine: (embed-certs-396582) DBG | Closing plugin on server side
	I0210 13:19:30.395752  687731 main.go:141] libmachine: Making call to close driver server
	I0210 13:19:30.395780  687731 main.go:141] libmachine: (embed-certs-396582) Calling .Close
	I0210 13:19:30.396082  687731 main.go:141] libmachine: (embed-certs-396582) DBG | Closing plugin on server side
	I0210 13:19:30.396215  687731 main.go:141] libmachine: Successfully made call to close driver server
	I0210 13:19:30.396268  687731 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 13:19:30.588682  687731 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.358837933s)
	I0210 13:19:30.588752  687731 main.go:141] libmachine: Making call to close driver server
	I0210 13:19:30.588768  687731 main.go:141] libmachine: (embed-certs-396582) Calling .Close
	I0210 13:19:30.589169  687731 main.go:141] libmachine: Successfully made call to close driver server
	I0210 13:19:30.589190  687731 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 13:19:30.589200  687731 main.go:141] libmachine: Making call to close driver server
	I0210 13:19:30.589207  687731 main.go:141] libmachine: (embed-certs-396582) Calling .Close
	I0210 13:19:30.589588  687731 main.go:141] libmachine: (embed-certs-396582) DBG | Closing plugin on server side
	I0210 13:19:30.589608  687731 main.go:141] libmachine: Successfully made call to close driver server
	I0210 13:19:30.589626  687731 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 13:19:30.589651  687731 addons.go:479] Verifying addon metrics-server=true in "embed-certs-396582"
	I0210 13:19:30.850335  687731 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.305382678s)
	I0210 13:19:30.850395  687731 main.go:141] libmachine: Making call to close driver server
	I0210 13:19:30.850414  687731 main.go:141] libmachine: (embed-certs-396582) Calling .Close
	I0210 13:19:30.850704  687731 main.go:141] libmachine: (embed-certs-396582) DBG | Closing plugin on server side
	I0210 13:19:30.850754  687731 main.go:141] libmachine: Successfully made call to close driver server
	I0210 13:19:30.850763  687731 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 13:19:30.850771  687731 main.go:141] libmachine: Making call to close driver server
	I0210 13:19:30.850776  687731 main.go:141] libmachine: (embed-certs-396582) Calling .Close
	I0210 13:19:30.851029  687731 main.go:141] libmachine: Successfully made call to close driver server
	I0210 13:19:30.851043  687731 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 13:19:30.852502  687731 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-396582 addons enable metrics-server
	
	I0210 13:19:30.853838  687731 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0210 13:19:26.494963  687246 pod_ready.go:103] pod "metrics-server-f79f97bbb-r9f86" in "kube-system" namespace has status "Ready":"False"
	I0210 13:19:28.993274  687246 pod_ready.go:103] pod "metrics-server-f79f97bbb-r9f86" in "kube-system" namespace has status "Ready":"False"
	I0210 13:19:30.999604  687246 pod_ready.go:103] pod "metrics-server-f79f97bbb-r9f86" in "kube-system" namespace has status "Ready":"False"
	I0210 13:19:26.649650  679972 cri.go:89] found id: "27fbbf4567ed637ba5fcc220e7a30e6ae4112b3c8576695f7f9d288b3c461e2b"
	I0210 13:19:26.649682  679972 cri.go:89] found id: ""
	I0210 13:19:26.649694  679972 logs.go:282] 1 containers: [27fbbf4567ed637ba5fcc220e7a30e6ae4112b3c8576695f7f9d288b3c461e2b]
	I0210 13:19:26.649758  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:26.653664  679972 logs.go:123] Gathering logs for storage-provisioner [27fbbf4567ed637ba5fcc220e7a30e6ae4112b3c8576695f7f9d288b3c461e2b] ...
	I0210 13:19:26.653699  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27fbbf4567ed637ba5fcc220e7a30e6ae4112b3c8576695f7f9d288b3c461e2b"
	I0210 13:19:26.685805  679972 logs.go:123] Gathering logs for kubelet ...
	I0210 13:19:26.685849  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:19:26.805694  679972 logs.go:123] Gathering logs for dmesg ...
	I0210 13:19:26.805733  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:19:26.822496  679972 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:19:26.822531  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:19:26.889238  679972 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:19:26.889268  679972 logs.go:123] Gathering logs for kube-apiserver [cc0fad0e28c192a15bcbd2ea0cbd62014f415a10df309d883d47e53249e42b99] ...
	I0210 13:19:26.889286  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc0fad0e28c192a15bcbd2ea0cbd62014f415a10df309d883d47e53249e42b99"
	I0210 13:19:26.924811  679972 logs.go:123] Gathering logs for etcd [20160adca5607e90c84c8383218ad704de5a5f989ebb40d7aa180da7c1ac9ff2] ...
	I0210 13:19:26.924855  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20160adca5607e90c84c8383218ad704de5a5f989ebb40d7aa180da7c1ac9ff2"
	I0210 13:19:26.969421  679972 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:19:26.969453  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:19:27.262938  679972 logs.go:123] Gathering logs for container status ...
	I0210 13:19:27.262977  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:19:27.318762  679972 logs.go:123] Gathering logs for coredns [598f33d31a9580b27d51606559814f76af6d19f0ce8c85565694b0042307e34d] ...
	I0210 13:19:27.318792  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 598f33d31a9580b27d51606559814f76af6d19f0ce8c85565694b0042307e34d"
	I0210 13:19:27.353828  679972 logs.go:123] Gathering logs for coredns [de4935b1d2c7413b50a3a6bd9cb6b5243f8918b6f3662d005b53ee2d6b5d398f] ...
	I0210 13:19:27.353864  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de4935b1d2c7413b50a3a6bd9cb6b5243f8918b6f3662d005b53ee2d6b5d398f"
	I0210 13:19:27.387056  679972 logs.go:123] Gathering logs for kube-scheduler [4ead52805781732f3cfbd1fa7f211cc02747bbd972dc9ecea60035b9c2f809ae] ...
	I0210 13:19:27.387091  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ead52805781732f3cfbd1fa7f211cc02747bbd972dc9ecea60035b9c2f809ae"
	I0210 13:19:27.431230  679972 logs.go:123] Gathering logs for kube-proxy [489730fca41c45aafdf96e8fad2af2dd4df11a2818e7d0cf4b02956a5c613aa0] ...
	I0210 13:19:27.431270  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 489730fca41c45aafdf96e8fad2af2dd4df11a2818e7d0cf4b02956a5c613aa0"
	I0210 13:19:27.465180  679972 logs.go:123] Gathering logs for kube-controller-manager [c8aa1561a3ec9eeae9ad70bcf145afbf075ed6989d1e52c4b19164eecdc28e79] ...
	I0210 13:19:27.465217  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8aa1561a3ec9eeae9ad70bcf145afbf075ed6989d1e52c4b19164eecdc28e79"
	I0210 13:19:30.002281  679972 api_server.go:253] Checking apiserver healthz at https://192.168.50.25:8443/healthz ...
	I0210 13:19:30.002938  679972 api_server.go:269] stopped: https://192.168.50.25:8443/healthz: Get "https://192.168.50.25:8443/healthz": dial tcp 192.168.50.25:8443: connect: connection refused
	I0210 13:19:30.002997  679972 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:19:30.003058  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:19:30.041931  679972 cri.go:89] found id: "cc0fad0e28c192a15bcbd2ea0cbd62014f415a10df309d883d47e53249e42b99"
	I0210 13:19:30.041961  679972 cri.go:89] found id: ""
	I0210 13:19:30.041973  679972 logs.go:282] 1 containers: [cc0fad0e28c192a15bcbd2ea0cbd62014f415a10df309d883d47e53249e42b99]
	I0210 13:19:30.042042  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:30.046107  679972 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:19:30.046189  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:19:30.090026  679972 cri.go:89] found id: "20160adca5607e90c84c8383218ad704de5a5f989ebb40d7aa180da7c1ac9ff2"
	I0210 13:19:30.090064  679972 cri.go:89] found id: ""
	I0210 13:19:30.090077  679972 logs.go:282] 1 containers: [20160adca5607e90c84c8383218ad704de5a5f989ebb40d7aa180da7c1ac9ff2]
	I0210 13:19:30.090154  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:30.095525  679972 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:19:30.095594  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:19:30.140342  679972 cri.go:89] found id: "598f33d31a9580b27d51606559814f76af6d19f0ce8c85565694b0042307e34d"
	I0210 13:19:30.140381  679972 cri.go:89] found id: "de4935b1d2c7413b50a3a6bd9cb6b5243f8918b6f3662d005b53ee2d6b5d398f"
	I0210 13:19:30.140387  679972 cri.go:89] found id: ""
	I0210 13:19:30.140413  679972 logs.go:282] 2 containers: [598f33d31a9580b27d51606559814f76af6d19f0ce8c85565694b0042307e34d de4935b1d2c7413b50a3a6bd9cb6b5243f8918b6f3662d005b53ee2d6b5d398f]
	I0210 13:19:30.140488  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:30.145646  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:30.149852  679972 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:19:30.149928  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:19:30.192022  679972 cri.go:89] found id: "4ead52805781732f3cfbd1fa7f211cc02747bbd972dc9ecea60035b9c2f809ae"
	I0210 13:19:30.192059  679972 cri.go:89] found id: ""
	I0210 13:19:30.192071  679972 logs.go:282] 1 containers: [4ead52805781732f3cfbd1fa7f211cc02747bbd972dc9ecea60035b9c2f809ae]
	I0210 13:19:30.192152  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:30.196693  679972 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:19:30.196791  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:19:30.231470  679972 cri.go:89] found id: "489730fca41c45aafdf96e8fad2af2dd4df11a2818e7d0cf4b02956a5c613aa0"
	I0210 13:19:30.231501  679972 cri.go:89] found id: ""
	I0210 13:19:30.231511  679972 logs.go:282] 1 containers: [489730fca41c45aafdf96e8fad2af2dd4df11a2818e7d0cf4b02956a5c613aa0]
	I0210 13:19:30.231574  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:30.236574  679972 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:19:30.236670  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:19:30.273399  679972 cri.go:89] found id: "c8aa1561a3ec9eeae9ad70bcf145afbf075ed6989d1e52c4b19164eecdc28e79"
	I0210 13:19:30.273429  679972 cri.go:89] found id: ""
	I0210 13:19:30.273440  679972 logs.go:282] 1 containers: [c8aa1561a3ec9eeae9ad70bcf145afbf075ed6989d1e52c4b19164eecdc28e79]
	I0210 13:19:30.273506  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:30.279917  679972 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:19:30.280014  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:19:30.317842  679972 cri.go:89] found id: ""
	I0210 13:19:30.317882  679972 logs.go:282] 0 containers: []
	W0210 13:19:30.317893  679972 logs.go:284] No container was found matching "kindnet"
	I0210 13:19:30.317901  679972 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0210 13:19:30.317968  679972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0210 13:19:30.357196  679972 cri.go:89] found id: "27fbbf4567ed637ba5fcc220e7a30e6ae4112b3c8576695f7f9d288b3c461e2b"
	I0210 13:19:30.357225  679972 cri.go:89] found id: ""
	I0210 13:19:30.357236  679972 logs.go:282] 1 containers: [27fbbf4567ed637ba5fcc220e7a30e6ae4112b3c8576695f7f9d288b3c461e2b]
	I0210 13:19:30.357302  679972 ssh_runner.go:195] Run: which crictl
	I0210 13:19:30.361129  679972 logs.go:123] Gathering logs for storage-provisioner [27fbbf4567ed637ba5fcc220e7a30e6ae4112b3c8576695f7f9d288b3c461e2b] ...
	I0210 13:19:30.361158  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27fbbf4567ed637ba5fcc220e7a30e6ae4112b3c8576695f7f9d288b3c461e2b"
	I0210 13:19:30.401859  679972 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:19:30.401893  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:19:30.763979  679972 logs.go:123] Gathering logs for container status ...
	I0210 13:19:30.764018  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:19:30.809423  679972 logs.go:123] Gathering logs for kubelet ...
	I0210 13:19:30.809461  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:19:30.925581  679972 logs.go:123] Gathering logs for dmesg ...
	I0210 13:19:30.925622  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:19:30.941676  679972 logs.go:123] Gathering logs for etcd [20160adca5607e90c84c8383218ad704de5a5f989ebb40d7aa180da7c1ac9ff2] ...
	I0210 13:19:30.941722  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20160adca5607e90c84c8383218ad704de5a5f989ebb40d7aa180da7c1ac9ff2"
	I0210 13:19:30.995716  679972 logs.go:123] Gathering logs for kube-scheduler [4ead52805781732f3cfbd1fa7f211cc02747bbd972dc9ecea60035b9c2f809ae] ...
	I0210 13:19:30.995766  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ead52805781732f3cfbd1fa7f211cc02747bbd972dc9ecea60035b9c2f809ae"
	I0210 13:19:31.042832  679972 logs.go:123] Gathering logs for kube-proxy [489730fca41c45aafdf96e8fad2af2dd4df11a2818e7d0cf4b02956a5c613aa0] ...
	I0210 13:19:31.042867  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 489730fca41c45aafdf96e8fad2af2dd4df11a2818e7d0cf4b02956a5c613aa0"
	I0210 13:19:31.083303  679972 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:19:31.083342  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:19:31.161275  679972 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:19:31.161305  679972 logs.go:123] Gathering logs for kube-apiserver [cc0fad0e28c192a15bcbd2ea0cbd62014f415a10df309d883d47e53249e42b99] ...
	I0210 13:19:31.161325  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc0fad0e28c192a15bcbd2ea0cbd62014f415a10df309d883d47e53249e42b99"
	I0210 13:19:31.204382  679972 logs.go:123] Gathering logs for coredns [598f33d31a9580b27d51606559814f76af6d19f0ce8c85565694b0042307e34d] ...
	I0210 13:19:31.204428  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 598f33d31a9580b27d51606559814f76af6d19f0ce8c85565694b0042307e34d"
	I0210 13:19:31.248944  679972 logs.go:123] Gathering logs for coredns [de4935b1d2c7413b50a3a6bd9cb6b5243f8918b6f3662d005b53ee2d6b5d398f] ...
	I0210 13:19:31.248979  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de4935b1d2c7413b50a3a6bd9cb6b5243f8918b6f3662d005b53ee2d6b5d398f"
	I0210 13:19:31.280754  679972 logs.go:123] Gathering logs for kube-controller-manager [c8aa1561a3ec9eeae9ad70bcf145afbf075ed6989d1e52c4b19164eecdc28e79] ...
	I0210 13:19:31.280791  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8aa1561a3ec9eeae9ad70bcf145afbf075ed6989d1e52c4b19164eecdc28e79"
	I0210 13:19:30.855115  687731 addons.go:514] duration metric: took 2.262549915s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0210 13:19:30.867953  687731 node_ready.go:53] node "embed-certs-396582" has status "Ready":"False"
	I0210 13:19:33.493474  687246 pod_ready.go:103] pod "metrics-server-f79f97bbb-r9f86" in "kube-system" namespace has status "Ready":"False"
	I0210 13:19:35.494257  687246 pod_ready.go:103] pod "metrics-server-f79f97bbb-r9f86" in "kube-system" namespace has status "Ready":"False"
	I0210 13:19:33.823643  679972 api_server.go:253] Checking apiserver healthz at https://192.168.50.25:8443/healthz ...
	I0210 13:19:33.824375  679972 api_server.go:269] stopped: https://192.168.50.25:8443/healthz: Get "https://192.168.50.25:8443/healthz": dial tcp 192.168.50.25:8443: connect: connection refused
	I0210 13:19:33.824480  679972 kubeadm.go:597] duration metric: took 4m5.640614684s to restartPrimaryControlPlane
	W0210 13:19:33.824569  679972 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0210 13:19:33.824613  679972 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0210 13:19:32.868304  687731 node_ready.go:53] node "embed-certs-396582" has status "Ready":"False"
	I0210 13:19:34.868677  687731 node_ready.go:53] node "embed-certs-396582" has status "Ready":"False"
	I0210 13:19:36.373544  687731 node_ready.go:49] node "embed-certs-396582" has status "Ready":"True"
	I0210 13:19:36.373577  687731 node_ready.go:38] duration metric: took 7.508844028s for node "embed-certs-396582" to be "Ready" ...
	I0210 13:19:36.373590  687731 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0210 13:19:36.379083  687731 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-l7crf" in "kube-system" namespace to be "Ready" ...
	I0210 13:19:36.388508  687731 pod_ready.go:93] pod "coredns-668d6bf9bc-l7crf" in "kube-system" namespace has status "Ready":"True"
	I0210 13:19:36.388537  687731 pod_ready.go:82] duration metric: took 9.418848ms for pod "coredns-668d6bf9bc-l7crf" in "kube-system" namespace to be "Ready" ...
	I0210 13:19:36.388558  687731 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-396582" in "kube-system" namespace to be "Ready" ...
	I0210 13:19:37.393815  687731 pod_ready.go:93] pod "etcd-embed-certs-396582" in "kube-system" namespace has status "Ready":"True"
	I0210 13:19:37.393850  687731 pod_ready.go:82] duration metric: took 1.005282168s for pod "etcd-embed-certs-396582" in "kube-system" namespace to be "Ready" ...
	I0210 13:19:37.393868  687731 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-396582" in "kube-system" namespace to be "Ready" ...
	I0210 13:19:37.397385  687731 pod_ready.go:93] pod "kube-apiserver-embed-certs-396582" in "kube-system" namespace has status "Ready":"True"
	I0210 13:19:37.397405  687731 pod_ready.go:82] duration metric: took 3.52854ms for pod "kube-apiserver-embed-certs-396582" in "kube-system" namespace to be "Ready" ...
	I0210 13:19:37.397417  687731 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-396582" in "kube-system" namespace to be "Ready" ...
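
The pod_ready lines above poll each system-critical pod until its Ready condition reports True. A minimal sketch of that check with client-go (an assumption about the general approach, not minikube's pod_ready.go itself) is:

    // Illustrative only: report whether a pod's Ready condition is True,
    // roughly the condition the pod_ready.go polling above waits for.
    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func podReady(client kubernetes.Interface, ns, name string) (bool, error) {
    	pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	// The kubeconfig path is a placeholder for whatever the harness uses.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)
    	ok, err := podReady(client, "kube-system", "etcd-embed-certs-396582")
    	fmt.Println(ok, err)
    }
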
	I0210 13:19:37.993518  687246 pod_ready.go:103] pod "metrics-server-f79f97bbb-r9f86" in "kube-system" namespace has status "Ready":"False"
	I0210 13:19:39.995463  687246 pod_ready.go:103] pod "metrics-server-f79f97bbb-r9f86" in "kube-system" namespace has status "Ready":"False"
	I0210 13:19:36.687738  679972 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.863088572s)
	I0210 13:19:36.687841  679972 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 13:19:36.702365  679972 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0210 13:19:36.711826  679972 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 13:19:36.721235  679972 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 13:19:36.721269  679972 kubeadm.go:157] found existing configuration files:
	
	I0210 13:19:36.721320  679972 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0210 13:19:36.730011  679972 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 13:19:36.730083  679972 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 13:19:36.741608  679972 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0210 13:19:36.751034  679972 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 13:19:36.751103  679972 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 13:19:36.760178  679972 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0210 13:19:36.768664  679972 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 13:19:36.768733  679972 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 13:19:36.777261  679972 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0210 13:19:36.785473  679972 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 13:19:36.785531  679972 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
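
Each grep/rm pair above checks whether an existing kubeconfig file still references https://control-plane.minikube.internal:8443 and removes it when it does not (or, as in this run, when the file is missing entirely), so the subsequent kubeadm init can regenerate it. A rough stdlib-only Go equivalent, purely as a sketch of the idea:

    // Sketch only: drop kubeconfig files that no longer reference the expected
    // control-plane endpoint, mirroring the grep/rm sequence in the log above.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:8443"
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			// Missing or stale: remove so kubeadm init can recreate it.
    			if rmErr := os.Remove(f); rmErr != nil && !os.IsNotExist(rmErr) {
    				fmt.Println("remove failed:", rmErr)
    			}
    		}
    	}
    }
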
	I0210 13:19:36.793957  679972 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0210 13:19:36.943116  679972 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0210 13:19:37.904893  687731 pod_ready.go:93] pod "kube-controller-manager-embed-certs-396582" in "kube-system" namespace has status "Ready":"True"
	I0210 13:19:37.904920  687731 pod_ready.go:82] duration metric: took 507.495362ms for pod "kube-controller-manager-embed-certs-396582" in "kube-system" namespace to be "Ready" ...
	I0210 13:19:37.904931  687731 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jsm65" in "kube-system" namespace to be "Ready" ...
	I0210 13:19:37.968535  687731 pod_ready.go:93] pod "kube-proxy-jsm65" in "kube-system" namespace has status "Ready":"True"
	I0210 13:19:37.968560  687731 pod_ready.go:82] duration metric: took 63.623456ms for pod "kube-proxy-jsm65" in "kube-system" namespace to be "Ready" ...
	I0210 13:19:37.968570  687731 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-396582" in "kube-system" namespace to be "Ready" ...
	I0210 13:19:38.368821  687731 pod_ready.go:93] pod "kube-scheduler-embed-certs-396582" in "kube-system" namespace has status "Ready":"True"
	I0210 13:19:38.368860  687731 pod_ready.go:82] duration metric: took 400.281651ms for pod "kube-scheduler-embed-certs-396582" in "kube-system" namespace to be "Ready" ...
	I0210 13:19:38.368876  687731 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-f79f97bbb-97hlp" in "kube-system" namespace to be "Ready" ...
	I0210 13:19:40.374797  687731 pod_ready.go:103] pod "metrics-server-f79f97bbb-97hlp" in "kube-system" namespace has status "Ready":"False"
	I0210 13:19:42.443469  687731 pod_ready.go:103] pod "metrics-server-f79f97bbb-97hlp" in "kube-system" namespace has status "Ready":"False"
	I0210 13:19:44.852595  679972 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0210 13:19:44.852676  679972 kubeadm.go:310] [preflight] Running pre-flight checks
	I0210 13:19:44.852779  679972 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0210 13:19:44.852935  679972 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0210 13:19:44.853058  679972 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0210 13:19:44.853174  679972 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0210 13:19:44.854419  679972 out.go:235]   - Generating certificates and keys ...
	I0210 13:19:44.854520  679972 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0210 13:19:44.854606  679972 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0210 13:19:44.854699  679972 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0210 13:19:44.854764  679972 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0210 13:19:44.854860  679972 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0210 13:19:44.854913  679972 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0210 13:19:44.854982  679972 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0210 13:19:44.855069  679972 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0210 13:19:44.855140  679972 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0210 13:19:44.855241  679972 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0210 13:19:44.855324  679972 kubeadm.go:310] [certs] Using the existing "sa" key
	I0210 13:19:44.855402  679972 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0210 13:19:44.855470  679972 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0210 13:19:44.855562  679972 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0210 13:19:44.855648  679972 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0210 13:19:44.855739  679972 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0210 13:19:44.855818  679972 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0210 13:19:44.855929  679972 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0210 13:19:44.856017  679972 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0210 13:19:44.857664  679972 out.go:235]   - Booting up control plane ...
	I0210 13:19:44.857786  679972 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0210 13:19:44.857898  679972 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0210 13:19:44.858009  679972 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0210 13:19:44.858176  679972 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0210 13:19:44.858332  679972 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0210 13:19:44.858412  679972 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0210 13:19:44.858620  679972 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0210 13:19:44.858776  679972 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0210 13:19:44.858852  679972 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.059303ms
	I0210 13:19:44.858922  679972 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0210 13:19:44.859032  679972 kubeadm.go:310] [api-check] The API server is healthy after 5.001390311s
	I0210 13:19:44.859129  679972 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0210 13:19:44.859244  679972 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0210 13:19:44.859349  679972 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0210 13:19:44.859593  679972 kubeadm.go:310] [mark-control-plane] Marking the node kubernetes-upgrade-284631 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0210 13:19:44.859675  679972 kubeadm.go:310] [bootstrap-token] Using token: tntf8v.yzsfdvzilnbvzme8
	I0210 13:19:44.860701  679972 out.go:235]   - Configuring RBAC rules ...
	I0210 13:19:44.860823  679972 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0210 13:19:44.860933  679972 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0210 13:19:44.861092  679972 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0210 13:19:44.861252  679972 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0210 13:19:44.861411  679972 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0210 13:19:44.861556  679972 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0210 13:19:44.861713  679972 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0210 13:19:44.861778  679972 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0210 13:19:44.861864  679972 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0210 13:19:44.861884  679972 kubeadm.go:310] 
	I0210 13:19:44.861970  679972 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0210 13:19:44.861981  679972 kubeadm.go:310] 
	I0210 13:19:44.862081  679972 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0210 13:19:44.862091  679972 kubeadm.go:310] 
	I0210 13:19:44.862128  679972 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0210 13:19:44.862237  679972 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0210 13:19:44.862313  679972 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0210 13:19:44.862322  679972 kubeadm.go:310] 
	I0210 13:19:44.862415  679972 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0210 13:19:44.862436  679972 kubeadm.go:310] 
	I0210 13:19:44.862506  679972 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0210 13:19:44.862515  679972 kubeadm.go:310] 
	I0210 13:19:44.862593  679972 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0210 13:19:44.862703  679972 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0210 13:19:44.862806  679972 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0210 13:19:44.862815  679972 kubeadm.go:310] 
	I0210 13:19:44.862927  679972 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0210 13:19:44.863041  679972 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0210 13:19:44.863053  679972 kubeadm.go:310] 
	I0210 13:19:44.863164  679972 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token tntf8v.yzsfdvzilnbvzme8 \
	I0210 13:19:44.863324  679972 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:37d4bace26002796dd310d86a55ac47153684aa943b1e8f0eb361864e8edcaff \
	I0210 13:19:44.863365  679972 kubeadm.go:310] 	--control-plane 
	I0210 13:19:44.863374  679972 kubeadm.go:310] 
	I0210 13:19:44.863514  679972 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0210 13:19:44.863530  679972 kubeadm.go:310] 
	I0210 13:19:44.863627  679972 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token tntf8v.yzsfdvzilnbvzme8 \
	I0210 13:19:44.863778  679972 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:37d4bace26002796dd310d86a55ac47153684aa943b1e8f0eb361864e8edcaff 
	I0210 13:19:44.863803  679972 cni.go:84] Creating CNI manager for ""
	I0210 13:19:44.863826  679972 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 13:19:44.865518  679972 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0210 13:19:44.866779  679972 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0210 13:19:44.882793  679972 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
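
The 496-byte payload copied into /etc/cni/net.d/1-k8s.conflist is not reproduced in the log. As an assumption about its general shape only, a typical bridge CNI conflist looks roughly like the JSON embedded below; the subnet and names are illustrative, not the values minikube actually wrote:

    // Illustration only: write a generic bridge CNI conflist. The JSON here is an
    // assumption about the usual bridge-plugin layout, not a copy of the real file.
    package main

    import "os"

    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": {"portMappings": true}
        }
      ]
    }`

    func main() {
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
    		panic(err)
    	}
    }
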
	I0210 13:19:44.901335  679972 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0210 13:19:44.901434  679972 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 13:19:44.901525  679972 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kubernetes-upgrade-284631 minikube.k8s.io/updated_at=2025_02_10T13_19_44_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=ef65fd9d75393231710a2bc61f2cab58e3e6ecb2 minikube.k8s.io/name=kubernetes-upgrade-284631 minikube.k8s.io/primary=true
	I0210 13:19:45.086509  679972 ops.go:34] apiserver oom_adj: -16
	I0210 13:19:45.134232  679972 kubeadm.go:1113] duration metric: took 232.873735ms to wait for elevateKubeSystemPrivileges
	I0210 13:19:45.134282  679972 kubeadm.go:394] duration metric: took 4m17.056146289s to StartCluster
	I0210 13:19:45.134312  679972 settings.go:142] acquiring lock: {Name:mk4bd8331d641665e48ff1d1c4382f2e915609be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:19:45.134410  679972 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20383-625153/kubeconfig
	I0210 13:19:45.136770  679972 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20383-625153/kubeconfig: {Name:mke7ef1ff4ff1259856291979fdd0337df3a08b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:19:45.137075  679972 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.25 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0210 13:19:45.137173  679972 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0210 13:19:45.137291  679972 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-284631"
	I0210 13:19:45.137314  679972 addons.go:238] Setting addon storage-provisioner=true in "kubernetes-upgrade-284631"
	W0210 13:19:45.137334  679972 addons.go:247] addon storage-provisioner should already be in state true
	I0210 13:19:45.137348  679972 config.go:182] Loaded profile config "kubernetes-upgrade-284631": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 13:19:45.137366  679972 host.go:66] Checking if "kubernetes-upgrade-284631" exists ...
	I0210 13:19:45.137372  679972 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-284631"
	I0210 13:19:45.137408  679972 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-284631"
	I0210 13:19:45.137821  679972 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:19:45.137833  679972 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:19:45.137869  679972 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:19:45.137987  679972 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:19:45.138751  679972 out.go:177] * Verifying Kubernetes components...
	I0210 13:19:45.140046  679972 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 13:19:45.156873  679972 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36831
	I0210 13:19:45.157384  679972 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:19:45.157954  679972 main.go:141] libmachine: Using API Version  1
	I0210 13:19:45.157979  679972 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:19:45.158368  679972 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:19:45.158450  679972 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33593
	I0210 13:19:45.158617  679972 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetState
	I0210 13:19:45.158936  679972 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:19:45.159653  679972 main.go:141] libmachine: Using API Version  1
	I0210 13:19:45.159681  679972 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:19:45.160177  679972 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:19:45.160828  679972 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:19:45.160876  679972 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:19:45.161959  679972 kapi.go:59] client config for kubernetes-upgrade-284631: &rest.Config{Host:"https://192.168.50.25:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20383-625153/.minikube/profiles/kubernetes-upgrade-284631/client.crt", KeyFile:"/home/jenkins/minikube-integration/20383-625153/.minikube/profiles/kubernetes-upgrade-284631/client.key", CAFile:"/home/jenkins/minikube-integration/20383-625153/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(n
il), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24db320), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0210 13:19:45.162435  679972 addons.go:238] Setting addon default-storageclass=true in "kubernetes-upgrade-284631"
	W0210 13:19:45.162462  679972 addons.go:247] addon default-storageclass should already be in state true
	I0210 13:19:45.162498  679972 host.go:66] Checking if "kubernetes-upgrade-284631" exists ...
	I0210 13:19:45.162884  679972 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:19:45.162947  679972 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:19:45.182632  679972 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37533
	I0210 13:19:45.183185  679972 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:19:45.183733  679972 main.go:141] libmachine: Using API Version  1
	I0210 13:19:45.183760  679972 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:19:45.184299  679972 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:19:45.184530  679972 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetState
	I0210 13:19:45.186585  679972 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .DriverName
	I0210 13:19:45.186719  679972 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44087
	I0210 13:19:45.187345  679972 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:19:45.187954  679972 main.go:141] libmachine: Using API Version  1
	I0210 13:19:45.187988  679972 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:19:45.188749  679972 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 13:19:45.188821  679972 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:19:45.189510  679972 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:19:45.189550  679972 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:19:45.190194  679972 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0210 13:19:45.190214  679972 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0210 13:19:45.190234  679972 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetSSHHostname
	I0210 13:19:45.193272  679972 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | domain kubernetes-upgrade-284631 has defined MAC address 52:54:00:c8:50:79 in network mk-kubernetes-upgrade-284631
	I0210 13:19:45.193809  679972 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:50:79", ip: ""} in network mk-kubernetes-upgrade-284631: {Iface:virbr3 ExpiryTime:2025-02-10 14:13:10 +0000 UTC Type:0 Mac:52:54:00:c8:50:79 Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:kubernetes-upgrade-284631 Clientid:01:52:54:00:c8:50:79}
	I0210 13:19:45.193839  679972 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | domain kubernetes-upgrade-284631 has defined IP address 192.168.50.25 and MAC address 52:54:00:c8:50:79 in network mk-kubernetes-upgrade-284631
	I0210 13:19:45.194108  679972 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetSSHPort
	I0210 13:19:45.194248  679972 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetSSHKeyPath
	I0210 13:19:45.194423  679972 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetSSHUsername
	I0210 13:19:45.194638  679972 sshutil.go:53] new ssh client: &{IP:192.168.50.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/kubernetes-upgrade-284631/id_rsa Username:docker}
	I0210 13:19:45.212380  679972 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35865
	I0210 13:19:45.213006  679972 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:19:45.213679  679972 main.go:141] libmachine: Using API Version  1
	I0210 13:19:45.213709  679972 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:19:45.214114  679972 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:19:45.214357  679972 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetState
	I0210 13:19:45.216247  679972 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .DriverName
	I0210 13:19:45.216530  679972 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0210 13:19:45.216548  679972 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0210 13:19:45.216566  679972 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetSSHHostname
	I0210 13:19:45.219987  679972 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | domain kubernetes-upgrade-284631 has defined MAC address 52:54:00:c8:50:79 in network mk-kubernetes-upgrade-284631
	I0210 13:19:45.220332  679972 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:50:79", ip: ""} in network mk-kubernetes-upgrade-284631: {Iface:virbr3 ExpiryTime:2025-02-10 14:13:10 +0000 UTC Type:0 Mac:52:54:00:c8:50:79 Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:kubernetes-upgrade-284631 Clientid:01:52:54:00:c8:50:79}
	I0210 13:19:45.220355  679972 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | domain kubernetes-upgrade-284631 has defined IP address 192.168.50.25 and MAC address 52:54:00:c8:50:79 in network mk-kubernetes-upgrade-284631
	I0210 13:19:45.220559  679972 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetSSHPort
	I0210 13:19:45.220756  679972 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetSSHKeyPath
	I0210 13:19:45.220930  679972 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .GetSSHUsername
	I0210 13:19:45.221087  679972 sshutil.go:53] new ssh client: &{IP:192.168.50.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/kubernetes-upgrade-284631/id_rsa Username:docker}
	I0210 13:19:45.357644  679972 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 13:19:45.379365  679972 api_server.go:52] waiting for apiserver process to appear ...
	I0210 13:19:45.379453  679972 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:19:45.395697  679972 api_server.go:72] duration metric: took 258.559443ms to wait for apiserver process to appear ...
	I0210 13:19:45.395737  679972 api_server.go:88] waiting for apiserver healthz status ...
	I0210 13:19:45.395763  679972 api_server.go:253] Checking apiserver healthz at https://192.168.50.25:8443/healthz ...
	I0210 13:19:45.402758  679972 api_server.go:279] https://192.168.50.25:8443/healthz returned 200:
	ok
	I0210 13:19:45.412612  679972 api_server.go:141] control plane version: v1.32.1
	I0210 13:19:45.412669  679972 api_server.go:131] duration metric: took 16.899412ms to wait for apiserver health ...
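
The healthz wait above is essentially an HTTPS GET against https://192.168.50.25:8443/healthz that succeeds once the endpoint returns 200/ok. A bare-bones sketch of such a probe (skipping certificate verification purely to keep the example short; a real client would trust the cluster CA):

    // Sketch only: poll the apiserver /healthz endpoint until it returns 200,
    // the same condition the api_server.go lines above are waiting on.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		// InsecureSkipVerify keeps the example short; do not do this in real code.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	url := "https://192.168.50.25:8443/healthz"
    	for i := 0; i < 30; i++ {
    		resp, err := client.Get(url)
    		if err == nil {
    			healthy := resp.StatusCode == http.StatusOK
    			resp.Body.Close()
    			if healthy {
    				fmt.Println("apiserver healthy")
    				return
    			}
    		}
    		time.Sleep(time.Second)
    	}
    	fmt.Println("apiserver did not become healthy")
    }
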
	I0210 13:19:45.412681  679972 system_pods.go:43] waiting for kube-system pods to appear ...
	I0210 13:19:45.420888  679972 system_pods.go:59] 4 kube-system pods found
	I0210 13:19:45.420947  679972 system_pods.go:61] "etcd-kubernetes-upgrade-284631" [0aa512f4-f3b1-449c-9406-631939c1d58e] Running
	I0210 13:19:45.420956  679972 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-284631" [2e7ac8f3-4716-45b0-bc1f-a293a5d93cc0] Running
	I0210 13:19:45.420963  679972 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-284631" [520725db-eaa9-4021-a74f-c002f87539c5] Pending
	I0210 13:19:45.420974  679972 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-284631" [129537f3-dddd-44fb-8594-b1ebf6d1a7cf] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0210 13:19:45.420983  679972 system_pods.go:74] duration metric: took 8.294063ms to wait for pod list to return data ...
	I0210 13:19:45.421020  679972 kubeadm.go:582] duration metric: took 283.891329ms to wait for: map[apiserver:true system_pods:true]
	I0210 13:19:45.421043  679972 node_conditions.go:102] verifying NodePressure condition ...
	I0210 13:19:45.439460  679972 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 13:19:45.439544  679972 node_conditions.go:123] node cpu capacity is 2
	I0210 13:19:45.439566  679972 node_conditions.go:105] duration metric: took 18.516304ms to run NodePressure ...
	I0210 13:19:45.439581  679972 start.go:241] waiting for startup goroutines ...
	I0210 13:19:45.444491  679972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0210 13:19:45.544261  679972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0210 13:19:45.625091  679972 main.go:141] libmachine: Making call to close driver server
	I0210 13:19:45.625143  679972 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .Close
	I0210 13:19:45.626604  679972 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | Closing plugin on server side
	I0210 13:19:45.626608  679972 main.go:141] libmachine: Successfully made call to close driver server
	I0210 13:19:45.626641  679972 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 13:19:45.626656  679972 main.go:141] libmachine: Making call to close driver server
	I0210 13:19:45.626663  679972 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .Close
	I0210 13:19:45.626956  679972 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | Closing plugin on server side
	I0210 13:19:45.627001  679972 main.go:141] libmachine: Successfully made call to close driver server
	I0210 13:19:45.627010  679972 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 13:19:45.634264  679972 main.go:141] libmachine: Making call to close driver server
	I0210 13:19:45.634294  679972 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .Close
	I0210 13:19:45.634645  679972 main.go:141] libmachine: Successfully made call to close driver server
	I0210 13:19:45.634666  679972 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 13:19:46.066812  679972 main.go:141] libmachine: Making call to close driver server
	I0210 13:19:46.066848  679972 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .Close
	I0210 13:19:46.067198  679972 main.go:141] libmachine: Successfully made call to close driver server
	I0210 13:19:46.067225  679972 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 13:19:46.067236  679972 main.go:141] libmachine: Making call to close driver server
	I0210 13:19:46.067246  679972 main.go:141] libmachine: (kubernetes-upgrade-284631) Calling .Close
	I0210 13:19:46.067262  679972 main.go:141] libmachine: (kubernetes-upgrade-284631) DBG | Closing plugin on server side
	I0210 13:19:46.067513  679972 main.go:141] libmachine: Successfully made call to close driver server
	I0210 13:19:46.067530  679972 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 13:19:46.069424  679972 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0210 13:19:46.070695  679972 addons.go:514] duration metric: took 933.531262ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0210 13:19:46.070749  679972 start.go:246] waiting for cluster config update ...
	I0210 13:19:46.070764  679972 start.go:255] writing updated cluster config ...
	I0210 13:19:46.071085  679972 ssh_runner.go:195] Run: rm -f paused
	I0210 13:19:46.123115  679972 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0210 13:19:46.124847  679972 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-284631" cluster and "default" namespace by default
	I0210 13:19:41.996239  687246 pod_ready.go:103] pod "metrics-server-f79f97bbb-r9f86" in "kube-system" namespace has status "Ready":"False"
	I0210 13:19:44.496025  687246 pod_ready.go:103] pod "metrics-server-f79f97bbb-r9f86" in "kube-system" namespace has status "Ready":"False"
	
	
	==> CRI-O <==
	Feb 10 13:19:46 kubernetes-upgrade-284631 crio[3194]: time="2025-02-10 13:19:46.986559253Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739193586986515769,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9f833c75-9369-4b3a-8ac1-3b80427b4717 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 13:19:46 kubernetes-upgrade-284631 crio[3194]: time="2025-02-10 13:19:46.988034995Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=80c905db-a3c4-4946-9bb1-b7eb4377f3f7 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:19:46 kubernetes-upgrade-284631 crio[3194]: time="2025-02-10 13:19:46.988096718Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=80c905db-a3c4-4946-9bb1-b7eb4377f3f7 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:19:46 kubernetes-upgrade-284631 crio[3194]: time="2025-02-10 13:19:46.988220254Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fffa20ee947677b915ac0f97fa695441fc6a8368929310517012f62ea67d2b2e,PodSandboxId:67eaff50b67ecab866cb99663d90810ef0e627b477dcff7424c2d1dd8ee6f946,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:6,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1739193579033843104,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-284631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 316c9321b0c12164cc676ba832e342d1,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 6,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88377f1c44bea49c8e9efd632520d6f6784ef191fbc1ec583be72d9ea69630a8,PodSandboxId:5144fc6130921f567fa9e8bdc7ebf3114c7d4c099a0fb0df109c032281e71c52,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1739193578985462341,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-284631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea38b69cbc13af545a4f64ad1d1a27df,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 1,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78f0b891a1005da6a5938c5ea4d189c4de6d1c6acff3f3d834b6cd95b8ce5df6,PodSandboxId:d227fec812de3a0d732ab2ed0c986e520d536120f7c04f25e82b2a6ad7478cd5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1739193578947426975,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-284631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 105f679fd316b3db1365995ad3ada085,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db02ba02a379504721057a468f5b96607f6715bae7eec9357592dc7b394f0008,PodSandboxId:24a051e89cd9acb8ff1066f00f1926cab22a9e2cc35269cc75750b01aa4bbc40,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:6,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1739193578862229054,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-284631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cf5a6680c4d3b7a8b3add77bd89538b,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 6,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=80c905db-a3c4-4946-9bb1-b7eb4377f3f7 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:19:47 kubernetes-upgrade-284631 crio[3194]: time="2025-02-10 13:19:47.021277079Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e80a2ab7-b78b-4bae-b815-6dae37f6f52e name=/runtime.v1.RuntimeService/Version
	Feb 10 13:19:47 kubernetes-upgrade-284631 crio[3194]: time="2025-02-10 13:19:47.021361107Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e80a2ab7-b78b-4bae-b815-6dae37f6f52e name=/runtime.v1.RuntimeService/Version
	Feb 10 13:19:47 kubernetes-upgrade-284631 crio[3194]: time="2025-02-10 13:19:47.022753457Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=43a02a24-053f-4078-9e03-e83bb7602f0e name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 13:19:47 kubernetes-upgrade-284631 crio[3194]: time="2025-02-10 13:19:47.023126043Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739193587023101780,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=43a02a24-053f-4078-9e03-e83bb7602f0e name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 13:19:47 kubernetes-upgrade-284631 crio[3194]: time="2025-02-10 13:19:47.023702371Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d91fbf27-de4f-464d-af5a-6f38d318e719 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:19:47 kubernetes-upgrade-284631 crio[3194]: time="2025-02-10 13:19:47.023768360Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d91fbf27-de4f-464d-af5a-6f38d318e719 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:19:47 kubernetes-upgrade-284631 crio[3194]: time="2025-02-10 13:19:47.023889131Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fffa20ee947677b915ac0f97fa695441fc6a8368929310517012f62ea67d2b2e,PodSandboxId:67eaff50b67ecab866cb99663d90810ef0e627b477dcff7424c2d1dd8ee6f946,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:6,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1739193579033843104,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-284631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 316c9321b0c12164cc676ba832e342d1,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 6,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88377f1c44bea49c8e9efd632520d6f6784ef191fbc1ec583be72d9ea69630a8,PodSandboxId:5144fc6130921f567fa9e8bdc7ebf3114c7d4c099a0fb0df109c032281e71c52,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1739193578985462341,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-284631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea38b69cbc13af545a4f64ad1d1a27df,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 1,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78f0b891a1005da6a5938c5ea4d189c4de6d1c6acff3f3d834b6cd95b8ce5df6,PodSandboxId:d227fec812de3a0d732ab2ed0c986e520d536120f7c04f25e82b2a6ad7478cd5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1739193578947426975,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-284631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 105f679fd316b3db1365995ad3ada085,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db02ba02a379504721057a468f5b96607f6715bae7eec9357592dc7b394f0008,PodSandboxId:24a051e89cd9acb8ff1066f00f1926cab22a9e2cc35269cc75750b01aa4bbc40,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:6,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1739193578862229054,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-284631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cf5a6680c4d3b7a8b3add77bd89538b,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 6,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d91fbf27-de4f-464d-af5a-6f38d318e719 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:19:47 kubernetes-upgrade-284631 crio[3194]: time="2025-02-10 13:19:47.066234657Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c6ca78cd-6c4c-4195-9c1c-70c2d28a20f7 name=/runtime.v1.RuntimeService/Version
	Feb 10 13:19:47 kubernetes-upgrade-284631 crio[3194]: time="2025-02-10 13:19:47.066306673Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c6ca78cd-6c4c-4195-9c1c-70c2d28a20f7 name=/runtime.v1.RuntimeService/Version
	Feb 10 13:19:47 kubernetes-upgrade-284631 crio[3194]: time="2025-02-10 13:19:47.067218998Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9dc70f75-9b80-4532-abbc-897ac55a5da4 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 13:19:47 kubernetes-upgrade-284631 crio[3194]: time="2025-02-10 13:19:47.067650963Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739193587067625963,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9dc70f75-9b80-4532-abbc-897ac55a5da4 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 13:19:47 kubernetes-upgrade-284631 crio[3194]: time="2025-02-10 13:19:47.068126123Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=675ad787-cd3b-4b84-becc-4622d27a0d43 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:19:47 kubernetes-upgrade-284631 crio[3194]: time="2025-02-10 13:19:47.068270314Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=675ad787-cd3b-4b84-becc-4622d27a0d43 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:19:47 kubernetes-upgrade-284631 crio[3194]: time="2025-02-10 13:19:47.068387463Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fffa20ee947677b915ac0f97fa695441fc6a8368929310517012f62ea67d2b2e,PodSandboxId:67eaff50b67ecab866cb99663d90810ef0e627b477dcff7424c2d1dd8ee6f946,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:6,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1739193579033843104,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-284631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 316c9321b0c12164cc676ba832e342d1,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 6,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88377f1c44bea49c8e9efd632520d6f6784ef191fbc1ec583be72d9ea69630a8,PodSandboxId:5144fc6130921f567fa9e8bdc7ebf3114c7d4c099a0fb0df109c032281e71c52,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1739193578985462341,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-284631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea38b69cbc13af545a4f64ad1d1a27df,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 1,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78f0b891a1005da6a5938c5ea4d189c4de6d1c6acff3f3d834b6cd95b8ce5df6,PodSandboxId:d227fec812de3a0d732ab2ed0c986e520d536120f7c04f25e82b2a6ad7478cd5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1739193578947426975,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-284631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 105f679fd316b3db1365995ad3ada085,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db02ba02a379504721057a468f5b96607f6715bae7eec9357592dc7b394f0008,PodSandboxId:24a051e89cd9acb8ff1066f00f1926cab22a9e2cc35269cc75750b01aa4bbc40,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:6,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1739193578862229054,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-284631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cf5a6680c4d3b7a8b3add77bd89538b,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 6,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=675ad787-cd3b-4b84-becc-4622d27a0d43 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:19:47 kubernetes-upgrade-284631 crio[3194]: time="2025-02-10 13:19:47.109309838Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cf145ffe-f629-47fe-8de7-e9f379f09c07 name=/runtime.v1.RuntimeService/Version
	Feb 10 13:19:47 kubernetes-upgrade-284631 crio[3194]: time="2025-02-10 13:19:47.109383580Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cf145ffe-f629-47fe-8de7-e9f379f09c07 name=/runtime.v1.RuntimeService/Version
	Feb 10 13:19:47 kubernetes-upgrade-284631 crio[3194]: time="2025-02-10 13:19:47.111691466Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=50941f51-91c6-4f70-8780-a6b16b411e94 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 13:19:47 kubernetes-upgrade-284631 crio[3194]: time="2025-02-10 13:19:47.112233151Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739193587112195521,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=50941f51-91c6-4f70-8780-a6b16b411e94 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 13:19:47 kubernetes-upgrade-284631 crio[3194]: time="2025-02-10 13:19:47.112854371Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f383ec6e-34e5-4633-9a89-1a95f87c3acb name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:19:47 kubernetes-upgrade-284631 crio[3194]: time="2025-02-10 13:19:47.112968710Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f383ec6e-34e5-4633-9a89-1a95f87c3acb name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:19:47 kubernetes-upgrade-284631 crio[3194]: time="2025-02-10 13:19:47.113291593Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fffa20ee947677b915ac0f97fa695441fc6a8368929310517012f62ea67d2b2e,PodSandboxId:67eaff50b67ecab866cb99663d90810ef0e627b477dcff7424c2d1dd8ee6f946,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:6,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1739193579033843104,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-284631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 316c9321b0c12164cc676ba832e342d1,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 6,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88377f1c44bea49c8e9efd632520d6f6784ef191fbc1ec583be72d9ea69630a8,PodSandboxId:5144fc6130921f567fa9e8bdc7ebf3114c7d4c099a0fb0df109c032281e71c52,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1739193578985462341,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-284631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea38b69cbc13af545a4f64ad1d1a27df,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 1,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78f0b891a1005da6a5938c5ea4d189c4de6d1c6acff3f3d834b6cd95b8ce5df6,PodSandboxId:d227fec812de3a0d732ab2ed0c986e520d536120f7c04f25e82b2a6ad7478cd5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1739193578947426975,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-284631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 105f679fd316b3db1365995ad3ada085,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db02ba02a379504721057a468f5b96607f6715bae7eec9357592dc7b394f0008,PodSandboxId:24a051e89cd9acb8ff1066f00f1926cab22a9e2cc35269cc75750b01aa4bbc40,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:6,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1739193578862229054,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-284631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cf5a6680c4d3b7a8b3add77bd89538b,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 6,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f383ec6e-34e5-4633-9a89-1a95f87c3acb name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	fffa20ee94767       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a   8 seconds ago       Running             kube-apiserver            6                   67eaff50b67ec       kube-apiserver-kubernetes-upgrade-284631
	88377f1c44bea       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1   8 seconds ago       Running             kube-scheduler            1                   5144fc6130921       kube-scheduler-kubernetes-upgrade-284631
	78f0b891a1005       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   8 seconds ago       Running             etcd                      1                   d227fec812de3       etcd-kubernetes-upgrade-284631
	db02ba02a3795       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35   8 seconds ago       Running             kube-controller-manager   6                   24a051e89cd9a       kube-controller-manager-kubernetes-upgrade-284631
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-284631
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-284631
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ef65fd9d75393231710a2bc61f2cab58e3e6ecb2
	                    minikube.k8s.io/name=kubernetes-upgrade-284631
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_02_10T13_19_44_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Feb 2025 13:19:41 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-284631
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Feb 2025 13:19:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Feb 2025 13:19:41 +0000   Mon, 10 Feb 2025 13:19:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Feb 2025 13:19:41 +0000   Mon, 10 Feb 2025 13:19:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Feb 2025 13:19:41 +0000   Mon, 10 Feb 2025 13:19:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Feb 2025 13:19:41 +0000   Mon, 10 Feb 2025 13:19:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.25
	  Hostname:    kubernetes-upgrade-284631
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 bbc5b863fbbf4cd3a11dab084adfc8f7
	  System UUID:                bbc5b863-fbbf-4cd3-a11d-ab084adfc8f7
	  Boot ID:                    c2748ee9-15f4-493f-a6e4-377ac42b30d9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-284631                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3s
	  kube-system                 kube-apiserver-kubernetes-upgrade-284631             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-284631    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-kubernetes-upgrade-284631             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (4%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age              From     Message
	  ----    ------                   ----             ----     -------
	  Normal  Starting                 9s               kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  9s (x8 over 9s)  kubelet  Node kubernetes-upgrade-284631 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x8 over 9s)  kubelet  Node kubernetes-upgrade-284631 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x7 over 9s)  kubelet  Node kubernetes-upgrade-284631 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9s               kubelet  Updated Node Allocatable limit across pods
	  Normal  Starting                 3s               kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  3s               kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3s               kubelet  Node kubernetes-upgrade-284631 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3s               kubelet  Node kubernetes-upgrade-284631 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3s               kubelet  Node kubernetes-upgrade-284631 status is now: NodeHasSufficientPID
	
	
	==> dmesg <==
	[  +8.705577] systemd-fstab-generator[563]: Ignoring "noauto" option for root device
	[  +0.065673] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063075] systemd-fstab-generator[575]: Ignoring "noauto" option for root device
	[  +0.189120] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.159849] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.305527] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +4.922541] systemd-fstab-generator[720]: Ignoring "noauto" option for root device
	[  +0.062397] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.794345] systemd-fstab-generator[843]: Ignoring "noauto" option for root device
	[ +11.653414] systemd-fstab-generator[1259]: Ignoring "noauto" option for root device
	[  +0.115102] kauditd_printk_skb: 97 callbacks suppressed
	[ +13.913369] kauditd_printk_skb: 107 callbacks suppressed
	[  +0.549492] systemd-fstab-generator[2542]: Ignoring "noauto" option for root device
	[  +0.239518] systemd-fstab-generator[2569]: Ignoring "noauto" option for root device
	[  +0.396095] systemd-fstab-generator[2739]: Ignoring "noauto" option for root device
	[  +0.477170] systemd-fstab-generator[2930]: Ignoring "noauto" option for root device
	[  +0.779687] systemd-fstab-generator[3031]: Ignoring "noauto" option for root device
	[Feb10 13:15] systemd-fstab-generator[3340]: Ignoring "noauto" option for root device
	[  +0.082177] kauditd_printk_skb: 199 callbacks suppressed
	[  +2.384659] systemd-fstab-generator[3863]: Ignoring "noauto" option for root device
	[ +21.676557] kauditd_printk_skb: 109 callbacks suppressed
	[Feb10 13:19] systemd-fstab-generator[9291]: Ignoring "noauto" option for root device
	[  +6.064045] systemd-fstab-generator[9621]: Ignoring "noauto" option for root device
	[  +0.098930] kauditd_printk_skb: 70 callbacks suppressed
	[  +1.252143] systemd-fstab-generator[9708]: Ignoring "noauto" option for root device
	
	
	==> etcd [78f0b891a1005da6a5938c5ea4d189c4de6d1c6acff3f3d834b6cd95b8ce5df6] <==
	{"level":"info","ts":"2025-02-10T13:19:39.207217Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-02-10T13:19:39.207549Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"9bed631aec89f51c","initial-advertise-peer-urls":["https://192.168.50.25:2380"],"listen-peer-urls":["https://192.168.50.25:2380"],"advertise-client-urls":["https://192.168.50.25:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.25:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-02-10T13:19:39.207575Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-02-10T13:19:39.207648Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.50.25:2380"}
	{"level":"info","ts":"2025-02-10T13:19:39.207660Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.50.25:2380"}
	{"level":"info","ts":"2025-02-10T13:19:39.757562Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9bed631aec89f51c is starting a new election at term 1"}
	{"level":"info","ts":"2025-02-10T13:19:39.757676Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9bed631aec89f51c became pre-candidate at term 1"}
	{"level":"info","ts":"2025-02-10T13:19:39.757706Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9bed631aec89f51c received MsgPreVoteResp from 9bed631aec89f51c at term 1"}
	{"level":"info","ts":"2025-02-10T13:19:39.757749Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9bed631aec89f51c became candidate at term 2"}
	{"level":"info","ts":"2025-02-10T13:19:39.757775Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9bed631aec89f51c received MsgVoteResp from 9bed631aec89f51c at term 2"}
	{"level":"info","ts":"2025-02-10T13:19:39.757799Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9bed631aec89f51c became leader at term 2"}
	{"level":"info","ts":"2025-02-10T13:19:39.757818Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9bed631aec89f51c elected leader 9bed631aec89f51c at term 2"}
	{"level":"info","ts":"2025-02-10T13:19:39.763534Z","caller":"etcdserver/server.go:2651","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-10T13:19:39.764008Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"9bed631aec89f51c","local-member-attributes":"{Name:kubernetes-upgrade-284631 ClientURLs:[https://192.168.50.25:2379]}","request-path":"/0/members/9bed631aec89f51c/attributes","cluster-id":"c83bdfd763dc36e2","publish-timeout":"7s"}
	{"level":"info","ts":"2025-02-10T13:19:39.764517Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-10T13:19:39.764978Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-10T13:19:39.765069Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c83bdfd763dc36e2","local-member-id":"9bed631aec89f51c","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-10T13:19:39.765215Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-10T13:19:39.767546Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-10T13:19:39.768094Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-10T13:19:39.777586Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-02-10T13:19:39.777670Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-02-10T13:19:39.777907Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.25:2379"}
	{"level":"info","ts":"2025-02-10T13:19:39.778119Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-10T13:19:39.780578Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 13:19:47 up 6 min,  0 users,  load average: 0.37, 0.18, 0.08
	Linux kubernetes-upgrade-284631 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [fffa20ee947677b915ac0f97fa695441fc6a8368929310517012f62ea67d2b2e] <==
	I0210 13:19:41.441095       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0210 13:19:41.442246       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0210 13:19:41.442670       1 aggregator.go:171] initial CRD sync complete...
	I0210 13:19:41.442720       1 autoregister_controller.go:144] Starting autoregister controller
	I0210 13:19:41.442740       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0210 13:19:41.442756       1 cache.go:39] Caches are synced for autoregister controller
	I0210 13:19:41.445311       1 shared_informer.go:320] Caches are synced for configmaps
	I0210 13:19:41.446423       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0210 13:19:41.446568       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0210 13:19:41.449743       1 policy_source.go:240] refreshing policies
	I0210 13:19:41.451225       1 controller.go:615] quota admission added evaluator for: namespaces
	I0210 13:19:41.475601       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0210 13:19:42.359344       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0210 13:19:42.422011       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0210 13:19:42.422043       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0210 13:19:43.062291       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0210 13:19:43.107085       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0210 13:19:43.165260       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0210 13:19:43.172537       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.50.25]
	I0210 13:19:43.173847       1 controller.go:615] quota admission added evaluator for: endpoints
	I0210 13:19:43.178425       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0210 13:19:43.445036       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0210 13:19:44.245195       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0210 13:19:44.285419       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0210 13:19:44.295728       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [db02ba02a379504721057a468f5b96607f6715bae7eec9357592dc7b394f0008] <==
	W0210 13:19:46.390001       1 type.go:183] The watchlist request for nodes ended with an error, falling back to the standard LIST semantics, err = nodes is forbidden: User "system:serviceaccount:kube-system:node-controller" cannot watch resource "nodes" in API group "" at the cluster scope
	I0210 13:19:46.394147       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0210 13:19:46.394377       1 controllermanager.go:765] "Started controller" controller="node-ipam-controller"
	I0210 13:19:46.394586       1 node_ipam_controller.go:141] "Starting ipam controller" logger="node-ipam-controller"
	I0210 13:19:46.394609       1 shared_informer.go:313] Waiting for caches to sync for node
	I0210 13:19:46.541337       1 controllermanager.go:765] "Started controller" controller="persistentvolume-binder-controller"
	I0210 13:19:46.541581       1 pv_controller_base.go:308] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0210 13:19:46.541642       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0210 13:19:46.688445       1 controllermanager.go:765] "Started controller" controller="deployment-controller"
	I0210 13:19:46.688599       1 deployment_controller.go:173] "Starting controller" logger="deployment-controller" controller="deployment"
	I0210 13:19:46.688754       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0210 13:19:46.986929       1 controllermanager.go:765] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0210 13:19:46.987013       1 horizontal.go:201] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0210 13:19:46.987023       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0210 13:19:47.139619       1 controllermanager.go:765] "Started controller" controller="ttl-controller"
	I0210 13:19:47.139761       1 ttl_controller.go:127] "Starting TTL controller" logger="ttl-controller"
	I0210 13:19:47.139793       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0210 13:19:47.324707       1 controllermanager.go:765] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0210 13:19:47.324827       1 pvc_protection_controller.go:168] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0210 13:19:47.324838       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0210 13:19:47.337515       1 controllermanager.go:765] "Started controller" controller="taint-eviction-controller"
	I0210 13:19:47.337534       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="selinux-warning-controller" requiredFeatureGates=["SELinuxChangePolicy"]
	I0210 13:19:47.337602       1 taint_eviction.go:281] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0210 13:19:47.337633       1 taint_eviction.go:287] "Sending events to api server" logger="taint-eviction-controller"
	I0210 13:19:47.337665       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	
	
	==> kube-scheduler [88377f1c44bea49c8e9efd632520d6f6784ef191fbc1ec583be72d9ea69630a8] <==
	W0210 13:19:41.463299       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0210 13:19:41.463333       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0210 13:19:42.345804       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0210 13:19:42.345861       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0210 13:19:42.359734       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0210 13:19:42.359766       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0210 13:19:42.373815       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0210 13:19:42.373907       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0210 13:19:42.437427       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0210 13:19:42.437535       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0210 13:19:42.452802       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0210 13:19:42.452984       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0210 13:19:42.475807       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0210 13:19:42.475899       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0210 13:19:42.526745       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0210 13:19:42.526842       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0210 13:19:42.708101       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0210 13:19:42.708225       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0210 13:19:42.807189       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0210 13:19:42.807294       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0210 13:19:42.840889       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0210 13:19:42.841121       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0210 13:19:42.841723       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0210 13:19:42.842370       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 13:19:45.246176       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 10 13:19:44 kubernetes-upgrade-284631 kubelet[9628]: I0210 13:19:44.373864    9628 kubelet_node_status.go:125] "Node was previously registered" node="kubernetes-upgrade-284631"
	Feb 10 13:19:44 kubernetes-upgrade-284631 kubelet[9628]: I0210 13:19:44.374069    9628 kubelet_node_status.go:79] "Successfully registered node" node="kubernetes-upgrade-284631"
	Feb 10 13:19:44 kubernetes-upgrade-284631 kubelet[9628]: I0210 13:19:44.447300    9628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/316c9321b0c12164cc676ba832e342d1-ca-certs\") pod \"kube-apiserver-kubernetes-upgrade-284631\" (UID: \"316c9321b0c12164cc676ba832e342d1\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-284631"
	Feb 10 13:19:44 kubernetes-upgrade-284631 kubelet[9628]: I0210 13:19:44.447565    9628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/316c9321b0c12164cc676ba832e342d1-k8s-certs\") pod \"kube-apiserver-kubernetes-upgrade-284631\" (UID: \"316c9321b0c12164cc676ba832e342d1\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-284631"
	Feb 10 13:19:44 kubernetes-upgrade-284631 kubelet[9628]: I0210 13:19:44.447654    9628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8cf5a6680c4d3b7a8b3add77bd89538b-ca-certs\") pod \"kube-controller-manager-kubernetes-upgrade-284631\" (UID: \"8cf5a6680c4d3b7a8b3add77bd89538b\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-284631"
	Feb 10 13:19:44 kubernetes-upgrade-284631 kubelet[9628]: I0210 13:19:44.447731    9628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8cf5a6680c4d3b7a8b3add77bd89538b-flexvolume-dir\") pod \"kube-controller-manager-kubernetes-upgrade-284631\" (UID: \"8cf5a6680c4d3b7a8b3add77bd89538b\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-284631"
	Feb 10 13:19:44 kubernetes-upgrade-284631 kubelet[9628]: I0210 13:19:44.447799    9628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8cf5a6680c4d3b7a8b3add77bd89538b-k8s-certs\") pod \"kube-controller-manager-kubernetes-upgrade-284631\" (UID: \"8cf5a6680c4d3b7a8b3add77bd89538b\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-284631"
	Feb 10 13:19:44 kubernetes-upgrade-284631 kubelet[9628]: I0210 13:19:44.447872    9628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8cf5a6680c4d3b7a8b3add77bd89538b-usr-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-284631\" (UID: \"8cf5a6680c4d3b7a8b3add77bd89538b\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-284631"
	Feb 10 13:19:44 kubernetes-upgrade-284631 kubelet[9628]: I0210 13:19:44.447936    9628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/105f679fd316b3db1365995ad3ada085-etcd-certs\") pod \"etcd-kubernetes-upgrade-284631\" (UID: \"105f679fd316b3db1365995ad3ada085\") " pod="kube-system/etcd-kubernetes-upgrade-284631"
	Feb 10 13:19:44 kubernetes-upgrade-284631 kubelet[9628]: I0210 13:19:44.448022    9628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/316c9321b0c12164cc676ba832e342d1-usr-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-284631\" (UID: \"316c9321b0c12164cc676ba832e342d1\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-284631"
	Feb 10 13:19:44 kubernetes-upgrade-284631 kubelet[9628]: I0210 13:19:44.448134    9628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8cf5a6680c4d3b7a8b3add77bd89538b-kubeconfig\") pod \"kube-controller-manager-kubernetes-upgrade-284631\" (UID: \"8cf5a6680c4d3b7a8b3add77bd89538b\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-284631"
	Feb 10 13:19:44 kubernetes-upgrade-284631 kubelet[9628]: I0210 13:19:44.448288    9628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea38b69cbc13af545a4f64ad1d1a27df-kubeconfig\") pod \"kube-scheduler-kubernetes-upgrade-284631\" (UID: \"ea38b69cbc13af545a4f64ad1d1a27df\") " pod="kube-system/kube-scheduler-kubernetes-upgrade-284631"
	Feb 10 13:19:44 kubernetes-upgrade-284631 kubelet[9628]: I0210 13:19:44.448363    9628 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/105f679fd316b3db1365995ad3ada085-etcd-data\") pod \"etcd-kubernetes-upgrade-284631\" (UID: \"105f679fd316b3db1365995ad3ada085\") " pod="kube-system/etcd-kubernetes-upgrade-284631"
	Feb 10 13:19:45 kubernetes-upgrade-284631 kubelet[9628]: I0210 13:19:45.131827    9628 apiserver.go:52] "Watching apiserver"
	Feb 10 13:19:45 kubernetes-upgrade-284631 kubelet[9628]: I0210 13:19:45.146582    9628 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Feb 10 13:19:45 kubernetes-upgrade-284631 kubelet[9628]: I0210 13:19:45.258366    9628 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-kubernetes-upgrade-284631"
	Feb 10 13:19:45 kubernetes-upgrade-284631 kubelet[9628]: I0210 13:19:45.258725    9628 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-kubernetes-upgrade-284631"
	Feb 10 13:19:45 kubernetes-upgrade-284631 kubelet[9628]: I0210 13:19:45.258779    9628 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-kubernetes-upgrade-284631"
	Feb 10 13:19:45 kubernetes-upgrade-284631 kubelet[9628]: E0210 13:19:45.275224    9628 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-kubernetes-upgrade-284631\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-284631"
	Feb 10 13:19:45 kubernetes-upgrade-284631 kubelet[9628]: E0210 13:19:45.279431    9628 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-kubernetes-upgrade-284631\" already exists" pod="kube-system/kube-scheduler-kubernetes-upgrade-284631"
	Feb 10 13:19:45 kubernetes-upgrade-284631 kubelet[9628]: E0210 13:19:45.279742    9628 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-kubernetes-upgrade-284631\" already exists" pod="kube-system/etcd-kubernetes-upgrade-284631"
	Feb 10 13:19:45 kubernetes-upgrade-284631 kubelet[9628]: I0210 13:19:45.387748    9628 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-kubernetes-upgrade-284631" podStartSLOduration=1.387710206 podStartE2EDuration="1.387710206s" podCreationTimestamp="2025-02-10 13:19:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-10 13:19:45.319749973 +0000 UTC m=+1.291813637" watchObservedRunningTime="2025-02-10 13:19:45.387710206 +0000 UTC m=+1.359773862"
	Feb 10 13:19:45 kubernetes-upgrade-284631 kubelet[9628]: I0210 13:19:45.387895    9628 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-kubernetes-upgrade-284631" podStartSLOduration=1.387888257 podStartE2EDuration="1.387888257s" podCreationTimestamp="2025-02-10 13:19:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-10 13:19:45.378869573 +0000 UTC m=+1.350933238" watchObservedRunningTime="2025-02-10 13:19:45.387888257 +0000 UTC m=+1.359951921"
	Feb 10 13:19:45 kubernetes-upgrade-284631 kubelet[9628]: I0210 13:19:45.434357    9628 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-kubernetes-upgrade-284631" podStartSLOduration=1.434338241 podStartE2EDuration="1.434338241s" podCreationTimestamp="2025-02-10 13:19:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-10 13:19:45.430589336 +0000 UTC m=+1.402653000" watchObservedRunningTime="2025-02-10 13:19:45.434338241 +0000 UTC m=+1.406401897"
	Feb 10 13:19:45 kubernetes-upgrade-284631 kubelet[9628]: I0210 13:19:45.434560    9628 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-kubernetes-upgrade-284631" podStartSLOduration=1.434553296 podStartE2EDuration="1.434553296s" podCreationTimestamp="2025-02-10 13:19:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-10 13:19:45.40388519 +0000 UTC m=+1.375948854" watchObservedRunningTime="2025-02-10 13:19:45.434553296 +0000 UTC m=+1.406616960"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-284631 -n kubernetes-upgrade-284631
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-284631 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context kubernetes-upgrade-284631 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-284631 describe pod storage-provisioner: exit status 1 (67.163023ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context kubernetes-upgrade-284631 describe pod storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-284631" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-284631
E0210 13:19:49.275236  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/flannel-651187/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-284631: (1.152432214s)
--- FAIL: TestKubernetesUpgrade (729.16s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (269.65s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-745712 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-745712 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m29.346179798s)

                                                
                                                
-- stdout --
	* [old-k8s-version-745712] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20383
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20383-625153/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20383-625153/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-745712" primary control-plane node in "old-k8s-version-745712" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0210 13:14:20.242678  682172 out.go:345] Setting OutFile to fd 1 ...
	I0210 13:14:20.242810  682172 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 13:14:20.242820  682172 out.go:358] Setting ErrFile to fd 2...
	I0210 13:14:20.242824  682172 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 13:14:20.242983  682172 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20383-625153/.minikube/bin
	I0210 13:14:20.243571  682172 out.go:352] Setting JSON to false
	I0210 13:14:20.244715  682172 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":17810,"bootTime":1739175450,"procs":308,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0210 13:14:20.244819  682172 start.go:139] virtualization: kvm guest
	I0210 13:14:20.247011  682172 out.go:177] * [old-k8s-version-745712] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0210 13:14:20.248248  682172 out.go:177]   - MINIKUBE_LOCATION=20383
	I0210 13:14:20.248244  682172 notify.go:220] Checking for updates...
	I0210 13:14:20.250797  682172 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 13:14:20.252059  682172 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20383-625153/kubeconfig
	I0210 13:14:20.253309  682172 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20383-625153/.minikube
	I0210 13:14:20.254486  682172 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0210 13:14:20.255729  682172 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0210 13:14:20.257593  682172 config.go:182] Loaded profile config "bridge-651187": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 13:14:20.257717  682172 config.go:182] Loaded profile config "flannel-651187": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 13:14:20.257810  682172 config.go:182] Loaded profile config "kubernetes-upgrade-284631": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 13:14:20.257918  682172 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 13:14:20.297746  682172 out.go:177] * Using the kvm2 driver based on user configuration
	I0210 13:14:20.298986  682172 start.go:297] selected driver: kvm2
	I0210 13:14:20.299013  682172 start.go:901] validating driver "kvm2" against <nil>
	I0210 13:14:20.299036  682172 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0210 13:14:20.300091  682172 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 13:14:20.300204  682172 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20383-625153/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0210 13:14:20.315990  682172 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0210 13:14:20.316057  682172 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0210 13:14:20.316329  682172 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0210 13:14:20.316363  682172 cni.go:84] Creating CNI manager for ""
	I0210 13:14:20.316408  682172 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 13:14:20.316415  682172 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0210 13:14:20.316472  682172 start.go:340] cluster config:
	{Name:old-k8s-version-745712 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-745712 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRI
Socket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:
0 GPUs: AutoPauseInterval:1m0s}
	I0210 13:14:20.316572  682172 iso.go:125] acquiring lock: {Name:mk013d189757e85c0699014f5ef29205d9d4927f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 13:14:20.318323  682172 out.go:177] * Starting "old-k8s-version-745712" primary control-plane node in "old-k8s-version-745712" cluster
	I0210 13:14:20.319455  682172 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0210 13:14:20.319505  682172 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20383-625153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0210 13:14:20.319517  682172 cache.go:56] Caching tarball of preloaded images
	I0210 13:14:20.319584  682172 preload.go:172] Found /home/jenkins/minikube-integration/20383-625153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0210 13:14:20.319594  682172 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0210 13:14:20.319681  682172 profile.go:143] Saving config to /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/old-k8s-version-745712/config.json ...
	I0210 13:14:20.319697  682172 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/old-k8s-version-745712/config.json: {Name:mk3f760f29178eba4fe697633ce4080f01825600 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:14:20.319836  682172 start.go:360] acquireMachinesLock for old-k8s-version-745712: {Name:mk28e87da66de739a4c7c70d1fb5afc4ce31a4d0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0210 13:14:20.319888  682172 start.go:364] duration metric: took 30.034µs to acquireMachinesLock for "old-k8s-version-745712"
	I0210 13:14:20.319908  682172 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-745712 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-versi
on-745712 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMir
ror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0210 13:14:20.319991  682172 start.go:125] createHost starting for "" (driver="kvm2")
	I0210 13:14:20.321480  682172 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0210 13:14:20.321672  682172 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:14:20.321727  682172 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:14:20.337877  682172 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41111
	I0210 13:14:20.338355  682172 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:14:20.338980  682172 main.go:141] libmachine: Using API Version  1
	I0210 13:14:20.339006  682172 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:14:20.339334  682172 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:14:20.339489  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetMachineName
	I0210 13:14:20.339596  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .DriverName
	I0210 13:14:20.339749  682172 start.go:159] libmachine.API.Create for "old-k8s-version-745712" (driver="kvm2")
	I0210 13:14:20.339790  682172 client.go:168] LocalClient.Create starting
	I0210 13:14:20.339827  682172 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca.pem
	I0210 13:14:20.339873  682172 main.go:141] libmachine: Decoding PEM data...
	I0210 13:14:20.339897  682172 main.go:141] libmachine: Parsing certificate...
	I0210 13:14:20.339968  682172 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20383-625153/.minikube/certs/cert.pem
	I0210 13:14:20.339997  682172 main.go:141] libmachine: Decoding PEM data...
	I0210 13:14:20.340014  682172 main.go:141] libmachine: Parsing certificate...
	I0210 13:14:20.340039  682172 main.go:141] libmachine: Running pre-create checks...
	I0210 13:14:20.340051  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .PreCreateCheck
	I0210 13:14:20.340360  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetConfigRaw
	I0210 13:14:20.340771  682172 main.go:141] libmachine: Creating machine...
	I0210 13:14:20.340787  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .Create
	I0210 13:14:20.340921  682172 main.go:141] libmachine: (old-k8s-version-745712) creating KVM machine...
	I0210 13:14:20.340931  682172 main.go:141] libmachine: (old-k8s-version-745712) creating network...
	I0210 13:14:20.342163  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | found existing default KVM network
	I0210 13:14:20.343336  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | I0210 13:14:20.343160  682195 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:9f:e2:e9} reservation:<nil>}
	I0210 13:14:20.344041  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | I0210 13:14:20.343976  682195 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:e2:da:dc} reservation:<nil>}
	I0210 13:14:20.344946  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | I0210 13:14:20.344855  682195 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:96:8d:23} reservation:<nil>}
	I0210 13:14:20.346087  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | I0210 13:14:20.346004  682195 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003891c0}
	I0210 13:14:20.346144  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | created network xml: 
	I0210 13:14:20.346164  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | <network>
	I0210 13:14:20.346179  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG |   <name>mk-old-k8s-version-745712</name>
	I0210 13:14:20.346186  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG |   <dns enable='no'/>
	I0210 13:14:20.346195  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG |   
	I0210 13:14:20.346212  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0210 13:14:20.346225  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG |     <dhcp>
	I0210 13:14:20.346236  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0210 13:14:20.346247  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG |     </dhcp>
	I0210 13:14:20.346261  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG |   </ip>
	I0210 13:14:20.346304  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG |   
	I0210 13:14:20.346334  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | </network>
	I0210 13:14:20.346348  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | 
	I0210 13:14:20.351727  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | trying to create private KVM network mk-old-k8s-version-745712 192.168.72.0/24...
	I0210 13:14:20.429237  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | private KVM network mk-old-k8s-version-745712 192.168.72.0/24 created
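
For reference, the two steps logged above (rendering the network XML, then creating the private KVM network) reduce to defining a libvirt network from that XML and starting it. A minimal sketch, assuming the libvirt Go bindings (libvirt.org/go/libvirt) and a hypothetical helper package named kvmsketch; networkXML stands in for the <network> document printed above:

	package kvmsketch

	import "libvirt.org/go/libvirt"

	// defineNetwork defines and starts a private libvirt network from an XML
	// document like the mk-old-k8s-version-745712 one above. Sketch only;
	// building it needs cgo and the libvirt development headers.
	func defineNetwork(uri, networkXML string) error {
		conn, err := libvirt.NewConnect(uri) // e.g. "qemu:///system", as in KVMQemuURI above
		if err != nil {
			return err
		}
		defer conn.Close()

		net, err := conn.NetworkDefineXML(networkXML) // persist the definition
		if err != nil {
			return err
		}
		defer net.Free()
		return net.Create() // bring the bridge up (libvirt picks e.g. virbr2)
	}
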
	I0210 13:14:20.429364  682172 main.go:141] libmachine: (old-k8s-version-745712) setting up store path in /home/jenkins/minikube-integration/20383-625153/.minikube/machines/old-k8s-version-745712 ...
	I0210 13:14:20.429538  682172 main.go:141] libmachine: (old-k8s-version-745712) building disk image from file:///home/jenkins/minikube-integration/20383-625153/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0210 13:14:20.429690  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | I0210 13:14:20.429603  682195 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20383-625153/.minikube
	I0210 13:14:20.429854  682172 main.go:141] libmachine: (old-k8s-version-745712) Downloading /home/jenkins/minikube-integration/20383-625153/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20383-625153/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0210 13:14:20.723885  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | I0210 13:14:20.723735  682195 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20383-625153/.minikube/machines/old-k8s-version-745712/id_rsa...
	I0210 13:14:20.990153  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | I0210 13:14:20.990021  682195 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20383-625153/.minikube/machines/old-k8s-version-745712/old-k8s-version-745712.rawdisk...
	I0210 13:14:20.990190  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | Writing magic tar header
	I0210 13:14:20.990206  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | Writing SSH key tar header
	I0210 13:14:20.990222  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | I0210 13:14:20.990133  682195 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20383-625153/.minikube/machines/old-k8s-version-745712 ...
	I0210 13:14:20.990238  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20383-625153/.minikube/machines/old-k8s-version-745712
	I0210 13:14:20.990262  682172 main.go:141] libmachine: (old-k8s-version-745712) setting executable bit set on /home/jenkins/minikube-integration/20383-625153/.minikube/machines/old-k8s-version-745712 (perms=drwx------)
	I0210 13:14:20.990289  682172 main.go:141] libmachine: (old-k8s-version-745712) setting executable bit set on /home/jenkins/minikube-integration/20383-625153/.minikube/machines (perms=drwxr-xr-x)
	I0210 13:14:20.990305  682172 main.go:141] libmachine: (old-k8s-version-745712) setting executable bit set on /home/jenkins/minikube-integration/20383-625153/.minikube (perms=drwxr-xr-x)
	I0210 13:14:20.990322  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20383-625153/.minikube/machines
	I0210 13:14:20.990337  682172 main.go:141] libmachine: (old-k8s-version-745712) setting executable bit set on /home/jenkins/minikube-integration/20383-625153 (perms=drwxrwxr-x)
	I0210 13:14:20.990351  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20383-625153/.minikube
	I0210 13:14:20.990367  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20383-625153
	I0210 13:14:20.990381  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0210 13:14:20.990407  682172 main.go:141] libmachine: (old-k8s-version-745712) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0210 13:14:20.990420  682172 main.go:141] libmachine: (old-k8s-version-745712) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0210 13:14:20.990426  682172 main.go:141] libmachine: (old-k8s-version-745712) creating domain...
	I0210 13:14:20.990434  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | checking permissions on dir: /home/jenkins
	I0210 13:14:20.990442  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | checking permissions on dir: /home
	I0210 13:14:20.990450  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | skipping /home - not owner
	I0210 13:14:20.991642  682172 main.go:141] libmachine: (old-k8s-version-745712) define libvirt domain using xml: 
	I0210 13:14:20.991679  682172 main.go:141] libmachine: (old-k8s-version-745712) <domain type='kvm'>
	I0210 13:14:20.991690  682172 main.go:141] libmachine: (old-k8s-version-745712)   <name>old-k8s-version-745712</name>
	I0210 13:14:20.991702  682172 main.go:141] libmachine: (old-k8s-version-745712)   <memory unit='MiB'>2200</memory>
	I0210 13:14:20.991714  682172 main.go:141] libmachine: (old-k8s-version-745712)   <vcpu>2</vcpu>
	I0210 13:14:20.991733  682172 main.go:141] libmachine: (old-k8s-version-745712)   <features>
	I0210 13:14:20.991741  682172 main.go:141] libmachine: (old-k8s-version-745712)     <acpi/>
	I0210 13:14:20.991745  682172 main.go:141] libmachine: (old-k8s-version-745712)     <apic/>
	I0210 13:14:20.991752  682172 main.go:141] libmachine: (old-k8s-version-745712)     <pae/>
	I0210 13:14:20.991756  682172 main.go:141] libmachine: (old-k8s-version-745712)     
	I0210 13:14:20.991761  682172 main.go:141] libmachine: (old-k8s-version-745712)   </features>
	I0210 13:14:20.991768  682172 main.go:141] libmachine: (old-k8s-version-745712)   <cpu mode='host-passthrough'>
	I0210 13:14:20.991827  682172 main.go:141] libmachine: (old-k8s-version-745712)   
	I0210 13:14:20.991865  682172 main.go:141] libmachine: (old-k8s-version-745712)   </cpu>
	I0210 13:14:20.991881  682172 main.go:141] libmachine: (old-k8s-version-745712)   <os>
	I0210 13:14:20.991894  682172 main.go:141] libmachine: (old-k8s-version-745712)     <type>hvm</type>
	I0210 13:14:20.991908  682172 main.go:141] libmachine: (old-k8s-version-745712)     <boot dev='cdrom'/>
	I0210 13:14:20.991920  682172 main.go:141] libmachine: (old-k8s-version-745712)     <boot dev='hd'/>
	I0210 13:14:20.991933  682172 main.go:141] libmachine: (old-k8s-version-745712)     <bootmenu enable='no'/>
	I0210 13:14:20.991952  682172 main.go:141] libmachine: (old-k8s-version-745712)   </os>
	I0210 13:14:20.991963  682172 main.go:141] libmachine: (old-k8s-version-745712)   <devices>
	I0210 13:14:20.991972  682172 main.go:141] libmachine: (old-k8s-version-745712)     <disk type='file' device='cdrom'>
	I0210 13:14:20.991990  682172 main.go:141] libmachine: (old-k8s-version-745712)       <source file='/home/jenkins/minikube-integration/20383-625153/.minikube/machines/old-k8s-version-745712/boot2docker.iso'/>
	I0210 13:14:20.992010  682172 main.go:141] libmachine: (old-k8s-version-745712)       <target dev='hdc' bus='scsi'/>
	I0210 13:14:20.992028  682172 main.go:141] libmachine: (old-k8s-version-745712)       <readonly/>
	I0210 13:14:20.992050  682172 main.go:141] libmachine: (old-k8s-version-745712)     </disk>
	I0210 13:14:20.992060  682172 main.go:141] libmachine: (old-k8s-version-745712)     <disk type='file' device='disk'>
	I0210 13:14:20.992069  682172 main.go:141] libmachine: (old-k8s-version-745712)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0210 13:14:20.992086  682172 main.go:141] libmachine: (old-k8s-version-745712)       <source file='/home/jenkins/minikube-integration/20383-625153/.minikube/machines/old-k8s-version-745712/old-k8s-version-745712.rawdisk'/>
	I0210 13:14:20.992101  682172 main.go:141] libmachine: (old-k8s-version-745712)       <target dev='hda' bus='virtio'/>
	I0210 13:14:20.992127  682172 main.go:141] libmachine: (old-k8s-version-745712)     </disk>
	I0210 13:14:20.992155  682172 main.go:141] libmachine: (old-k8s-version-745712)     <interface type='network'>
	I0210 13:14:20.992168  682172 main.go:141] libmachine: (old-k8s-version-745712)       <source network='mk-old-k8s-version-745712'/>
	I0210 13:14:20.992177  682172 main.go:141] libmachine: (old-k8s-version-745712)       <model type='virtio'/>
	I0210 13:14:20.992182  682172 main.go:141] libmachine: (old-k8s-version-745712)     </interface>
	I0210 13:14:20.992189  682172 main.go:141] libmachine: (old-k8s-version-745712)     <interface type='network'>
	I0210 13:14:20.992195  682172 main.go:141] libmachine: (old-k8s-version-745712)       <source network='default'/>
	I0210 13:14:20.992202  682172 main.go:141] libmachine: (old-k8s-version-745712)       <model type='virtio'/>
	I0210 13:14:20.992207  682172 main.go:141] libmachine: (old-k8s-version-745712)     </interface>
	I0210 13:14:20.992213  682172 main.go:141] libmachine: (old-k8s-version-745712)     <serial type='pty'>
	I0210 13:14:20.992218  682172 main.go:141] libmachine: (old-k8s-version-745712)       <target port='0'/>
	I0210 13:14:20.992224  682172 main.go:141] libmachine: (old-k8s-version-745712)     </serial>
	I0210 13:14:20.992241  682172 main.go:141] libmachine: (old-k8s-version-745712)     <console type='pty'>
	I0210 13:14:20.992260  682172 main.go:141] libmachine: (old-k8s-version-745712)       <target type='serial' port='0'/>
	I0210 13:14:20.992272  682172 main.go:141] libmachine: (old-k8s-version-745712)     </console>
	I0210 13:14:20.992283  682172 main.go:141] libmachine: (old-k8s-version-745712)     <rng model='virtio'>
	I0210 13:14:20.992297  682172 main.go:141] libmachine: (old-k8s-version-745712)       <backend model='random'>/dev/random</backend>
	I0210 13:14:20.992307  682172 main.go:141] libmachine: (old-k8s-version-745712)     </rng>
	I0210 13:14:20.992317  682172 main.go:141] libmachine: (old-k8s-version-745712)     
	I0210 13:14:20.992332  682172 main.go:141] libmachine: (old-k8s-version-745712)     
	I0210 13:14:20.992350  682172 main.go:141] libmachine: (old-k8s-version-745712)   </devices>
	I0210 13:14:20.992360  682172 main.go:141] libmachine: (old-k8s-version-745712) </domain>
	I0210 13:14:20.992372  682172 main.go:141] libmachine: (old-k8s-version-745712) 
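
The <domain> document above is handed to libvirt in much the same way: it is defined as a persistent domain and then started, as the "starting domain..." and "creating domain..." lines just below show. Continuing the same hypothetical sketch with the libvirt Go bindings:

	package kvmsketch

	import "libvirt.org/go/libvirt"

	// defineAndStartDomain mirrors the "define libvirt domain using xml" and
	// "starting domain..." steps; domainXML is a <domain> document like the one
	// printed above. Sketch only, with minimal error handling.
	func defineAndStartDomain(conn *libvirt.Connect, domainXML string) (*libvirt.Domain, error) {
		dom, err := conn.DomainDefineXML(domainXML) // persistent definition, not yet running
		if err != nil {
			return nil, err
		}
		if err := dom.Create(); err != nil { // boot the VM, the equivalent of `virsh start`
			dom.Free()
			return nil, err
		}
		return dom, nil
	}
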
	I0210 13:14:20.997405  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:f0:b9:c2 in network default
	I0210 13:14:20.998051  682172 main.go:141] libmachine: (old-k8s-version-745712) starting domain...
	I0210 13:14:20.998081  682172 main.go:141] libmachine: (old-k8s-version-745712) ensuring networks are active...
	I0210 13:14:20.998095  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:14:20.998772  682172 main.go:141] libmachine: (old-k8s-version-745712) Ensuring network default is active
	I0210 13:14:20.999084  682172 main.go:141] libmachine: (old-k8s-version-745712) Ensuring network mk-old-k8s-version-745712 is active
	I0210 13:14:20.999589  682172 main.go:141] libmachine: (old-k8s-version-745712) getting domain XML...
	I0210 13:14:21.000284  682172 main.go:141] libmachine: (old-k8s-version-745712) creating domain...
	I0210 13:14:22.494518  682172 main.go:141] libmachine: (old-k8s-version-745712) waiting for IP...
	I0210 13:14:22.495570  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:14:22.496181  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | unable to find current IP address of domain old-k8s-version-745712 in network mk-old-k8s-version-745712
	I0210 13:14:22.496248  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | I0210 13:14:22.496181  682195 retry.go:31] will retry after 206.585724ms: waiting for domain to come up
	I0210 13:14:22.704930  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:14:22.705536  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | unable to find current IP address of domain old-k8s-version-745712 in network mk-old-k8s-version-745712
	I0210 13:14:22.705564  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | I0210 13:14:22.705504  682195 retry.go:31] will retry after 261.073956ms: waiting for domain to come up
	I0210 13:14:22.968114  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:14:22.968968  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | unable to find current IP address of domain old-k8s-version-745712 in network mk-old-k8s-version-745712
	I0210 13:14:22.968999  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | I0210 13:14:22.968859  682195 retry.go:31] will retry after 420.810796ms: waiting for domain to come up
	I0210 13:14:23.391338  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:14:23.391957  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | unable to find current IP address of domain old-k8s-version-745712 in network mk-old-k8s-version-745712
	I0210 13:14:23.392032  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | I0210 13:14:23.391952  682195 retry.go:31] will retry after 464.402737ms: waiting for domain to come up
	I0210 13:14:23.857978  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:14:23.858996  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | unable to find current IP address of domain old-k8s-version-745712 in network mk-old-k8s-version-745712
	I0210 13:14:23.859024  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | I0210 13:14:23.858948  682195 retry.go:31] will retry after 734.537527ms: waiting for domain to come up
	I0210 13:14:24.595493  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:14:24.596176  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | unable to find current IP address of domain old-k8s-version-745712 in network mk-old-k8s-version-745712
	I0210 13:14:24.596214  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | I0210 13:14:24.596068  682195 retry.go:31] will retry after 804.337114ms: waiting for domain to come up
	I0210 13:14:25.402037  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:14:25.402621  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | unable to find current IP address of domain old-k8s-version-745712 in network mk-old-k8s-version-745712
	I0210 13:14:25.402647  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | I0210 13:14:25.402526  682195 retry.go:31] will retry after 932.090459ms: waiting for domain to come up
	I0210 13:14:26.336776  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:14:26.337362  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | unable to find current IP address of domain old-k8s-version-745712 in network mk-old-k8s-version-745712
	I0210 13:14:26.337426  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | I0210 13:14:26.337328  682195 retry.go:31] will retry after 1.137737045s: waiting for domain to come up
	I0210 13:14:27.476268  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:14:27.476832  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | unable to find current IP address of domain old-k8s-version-745712 in network mk-old-k8s-version-745712
	I0210 13:14:27.476866  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | I0210 13:14:27.476802  682195 retry.go:31] will retry after 1.158429769s: waiting for domain to come up
	I0210 13:14:28.637058  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:14:28.637613  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | unable to find current IP address of domain old-k8s-version-745712 in network mk-old-k8s-version-745712
	I0210 13:14:28.637648  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | I0210 13:14:28.637566  682195 retry.go:31] will retry after 2.042710093s: waiting for domain to come up
	I0210 13:14:30.682074  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:14:30.682779  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | unable to find current IP address of domain old-k8s-version-745712 in network mk-old-k8s-version-745712
	I0210 13:14:30.682805  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | I0210 13:14:30.682745  682195 retry.go:31] will retry after 2.466781545s: waiting for domain to come up
	I0210 13:14:33.151483  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:14:33.152045  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | unable to find current IP address of domain old-k8s-version-745712 in network mk-old-k8s-version-745712
	I0210 13:14:33.152077  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | I0210 13:14:33.152021  682195 retry.go:31] will retry after 3.390544227s: waiting for domain to come up
	I0210 13:14:36.543797  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:14:36.544313  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | unable to find current IP address of domain old-k8s-version-745712 in network mk-old-k8s-version-745712
	I0210 13:14:36.544343  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | I0210 13:14:36.544281  682195 retry.go:31] will retry after 3.699439795s: waiting for domain to come up
	I0210 13:14:40.246779  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:14:40.247429  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | unable to find current IP address of domain old-k8s-version-745712 in network mk-old-k8s-version-745712
	I0210 13:14:40.247461  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | I0210 13:14:40.247400  682195 retry.go:31] will retry after 3.559256722s: waiting for domain to come up
	I0210 13:14:43.810152  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:14:43.810621  682172 main.go:141] libmachine: (old-k8s-version-745712) found domain IP: 192.168.72.78
	I0210 13:14:43.810651  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has current primary IP address 192.168.72.78 and MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
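
The "waiting for IP" retries above poll the private network's DHCP leases until an entry with the domain's MAC address shows up, sleeping a little longer after each miss. A rough sketch of that loop with the same bindings; the backoff values here are illustrative, not minikube's exact retry schedule:

	package kvmsketch

	import (
		"fmt"
		"time"

		"libvirt.org/go/libvirt"
	)

	// waitForIP polls DHCP leases on the network until the given MAC address
	// appears, mirroring the "waiting for domain to come up" retries above.
	func waitForIP(net *libvirt.Network, mac string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		backoff := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			leases, err := net.GetDHCPLeases()
			if err != nil {
				return "", err
			}
			for _, l := range leases {
				if l.Mac == mac {
					return l.IPaddr, nil // e.g. 192.168.72.78 in this run
				}
			}
			time.Sleep(backoff)
			if backoff < 4*time.Second {
				backoff *= 2 // grow the wait, roughly like the retry.go intervals above
			}
		}
		return "", fmt.Errorf("no DHCP lease for %s within %s", mac, timeout)
	}
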
	I0210 13:14:43.810661  682172 main.go:141] libmachine: (old-k8s-version-745712) reserving static IP address...
	I0210 13:14:43.810994  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-745712", mac: "52:54:00:dd:e4:89", ip: "192.168.72.78"} in network mk-old-k8s-version-745712
	I0210 13:14:43.892254  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | Getting to WaitForSSH function...
	I0210 13:14:43.892301  682172 main.go:141] libmachine: (old-k8s-version-745712) reserved static IP address 192.168.72.78 for domain old-k8s-version-745712
	I0210 13:14:43.892316  682172 main.go:141] libmachine: (old-k8s-version-745712) waiting for SSH...
	I0210 13:14:43.894874  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:14:43.895314  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:e4:89", ip: ""} in network mk-old-k8s-version-745712: {Iface:virbr2 ExpiryTime:2025-02-10 14:14:36 +0000 UTC Type:0 Mac:52:54:00:dd:e4:89 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:minikube Clientid:01:52:54:00:dd:e4:89}
	I0210 13:14:43.895349  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined IP address 192.168.72.78 and MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:14:43.895479  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | Using SSH client type: external
	I0210 13:14:43.895503  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | Using SSH private key: /home/jenkins/minikube-integration/20383-625153/.minikube/machines/old-k8s-version-745712/id_rsa (-rw-------)
	I0210 13:14:43.895574  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.78 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20383-625153/.minikube/machines/old-k8s-version-745712/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0210 13:14:43.895599  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | About to run SSH command:
	I0210 13:14:43.895613  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | exit 0
	I0210 13:14:44.020833  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | SSH cmd err, output: <nil>: 
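
The WaitForSSH step above shells out to the external ssh client with the option list printed at 13:14:43.895574 and retries a no-op "exit 0" until the guest's sshd answers. A simplified sketch using os/exec; the attempt count and delay are assumptions, not minikube's values:

	package kvmsketch

	import (
		"os/exec"
		"time"
	)

	// waitForSSH runs `ssh <args> "exit 0"` until it succeeds or attempts run out;
	// args stands for the option list shown above (StrictHostKeyChecking=no,
	// the machine's id_rsa identity, docker@192.168.72.78, ...).
	func waitForSSH(args []string, attempts int, delay time.Duration) error {
		var err error
		for i := 0; i < attempts; i++ {
			cmd := exec.Command("/usr/bin/ssh", append(append([]string{}, args...), "exit 0")...)
			if err = cmd.Run(); err == nil {
				return nil
			}
			time.Sleep(delay)
		}
		return err
	}
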
	I0210 13:14:44.021136  682172 main.go:141] libmachine: (old-k8s-version-745712) KVM machine creation complete
	I0210 13:14:44.021449  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetConfigRaw
	I0210 13:14:44.022060  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .DriverName
	I0210 13:14:44.022270  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .DriverName
	I0210 13:14:44.022435  682172 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0210 13:14:44.022451  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetState
	I0210 13:14:44.023751  682172 main.go:141] libmachine: Detecting operating system of created instance...
	I0210 13:14:44.023767  682172 main.go:141] libmachine: Waiting for SSH to be available...
	I0210 13:14:44.023785  682172 main.go:141] libmachine: Getting to WaitForSSH function...
	I0210 13:14:44.023791  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHHostname
	I0210 13:14:44.026085  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:14:44.026472  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:e4:89", ip: ""} in network mk-old-k8s-version-745712: {Iface:virbr2 ExpiryTime:2025-02-10 14:14:36 +0000 UTC Type:0 Mac:52:54:00:dd:e4:89 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-745712 Clientid:01:52:54:00:dd:e4:89}
	I0210 13:14:44.026495  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined IP address 192.168.72.78 and MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:14:44.026694  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHPort
	I0210 13:14:44.026872  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHKeyPath
	I0210 13:14:44.026993  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHKeyPath
	I0210 13:14:44.027096  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHUsername
	I0210 13:14:44.027285  682172 main.go:141] libmachine: Using SSH client type: native
	I0210 13:14:44.027490  682172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.72.78 22 <nil> <nil>}
	I0210 13:14:44.027506  682172 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0210 13:14:44.140228  682172 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0210 13:14:44.140257  682172 main.go:141] libmachine: Detecting the provisioner...
	I0210 13:14:44.140269  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHHostname
	I0210 13:14:44.143646  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:14:44.144059  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:e4:89", ip: ""} in network mk-old-k8s-version-745712: {Iface:virbr2 ExpiryTime:2025-02-10 14:14:36 +0000 UTC Type:0 Mac:52:54:00:dd:e4:89 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-745712 Clientid:01:52:54:00:dd:e4:89}
	I0210 13:14:44.144102  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined IP address 192.168.72.78 and MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:14:44.144328  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHPort
	I0210 13:14:44.144557  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHKeyPath
	I0210 13:14:44.144760  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHKeyPath
	I0210 13:14:44.144932  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHUsername
	I0210 13:14:44.145134  682172 main.go:141] libmachine: Using SSH client type: native
	I0210 13:14:44.145395  682172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.72.78 22 <nil> <nil>}
	I0210 13:14:44.145412  682172 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0210 13:14:44.253584  682172 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0210 13:14:44.253658  682172 main.go:141] libmachine: found compatible host: buildroot
	I0210 13:14:44.253664  682172 main.go:141] libmachine: Provisioning with buildroot...
	I0210 13:14:44.253672  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetMachineName
	I0210 13:14:44.253938  682172 buildroot.go:166] provisioning hostname "old-k8s-version-745712"
	I0210 13:14:44.253972  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetMachineName
	I0210 13:14:44.254182  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHHostname
	I0210 13:14:44.256870  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:14:44.257207  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:e4:89", ip: ""} in network mk-old-k8s-version-745712: {Iface:virbr2 ExpiryTime:2025-02-10 14:14:36 +0000 UTC Type:0 Mac:52:54:00:dd:e4:89 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-745712 Clientid:01:52:54:00:dd:e4:89}
	I0210 13:14:44.257243  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined IP address 192.168.72.78 and MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:14:44.257426  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHPort
	I0210 13:14:44.257583  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHKeyPath
	I0210 13:14:44.257750  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHKeyPath
	I0210 13:14:44.257846  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHUsername
	I0210 13:14:44.258024  682172 main.go:141] libmachine: Using SSH client type: native
	I0210 13:14:44.258201  682172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.72.78 22 <nil> <nil>}
	I0210 13:14:44.258212  682172 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-745712 && echo "old-k8s-version-745712" | sudo tee /etc/hostname
	I0210 13:14:44.378661  682172 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-745712
	
	I0210 13:14:44.378690  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHHostname
	I0210 13:14:44.381398  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:14:44.381693  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:e4:89", ip: ""} in network mk-old-k8s-version-745712: {Iface:virbr2 ExpiryTime:2025-02-10 14:14:36 +0000 UTC Type:0 Mac:52:54:00:dd:e4:89 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-745712 Clientid:01:52:54:00:dd:e4:89}
	I0210 13:14:44.381742  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined IP address 192.168.72.78 and MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:14:44.381863  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHPort
	I0210 13:14:44.382063  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHKeyPath
	I0210 13:14:44.382245  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHKeyPath
	I0210 13:14:44.382434  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHUsername
	I0210 13:14:44.382625  682172 main.go:141] libmachine: Using SSH client type: native
	I0210 13:14:44.382796  682172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.72.78 22 <nil> <nil>}
	I0210 13:14:44.382812  682172 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-745712' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-745712/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-745712' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0210 13:14:44.501162  682172 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0210 13:14:44.501203  682172 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20383-625153/.minikube CaCertPath:/home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20383-625153/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20383-625153/.minikube}
	I0210 13:14:44.501257  682172 buildroot.go:174] setting up certificates
	I0210 13:14:44.501271  682172 provision.go:84] configureAuth start
	I0210 13:14:44.501291  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetMachineName
	I0210 13:14:44.501672  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetIP
	I0210 13:14:44.504548  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:14:44.504926  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:e4:89", ip: ""} in network mk-old-k8s-version-745712: {Iface:virbr2 ExpiryTime:2025-02-10 14:14:36 +0000 UTC Type:0 Mac:52:54:00:dd:e4:89 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-745712 Clientid:01:52:54:00:dd:e4:89}
	I0210 13:14:44.504949  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined IP address 192.168.72.78 and MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:14:44.505145  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHHostname
	I0210 13:14:44.507689  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:14:44.508029  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:e4:89", ip: ""} in network mk-old-k8s-version-745712: {Iface:virbr2 ExpiryTime:2025-02-10 14:14:36 +0000 UTC Type:0 Mac:52:54:00:dd:e4:89 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-745712 Clientid:01:52:54:00:dd:e4:89}
	I0210 13:14:44.508060  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined IP address 192.168.72.78 and MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:14:44.508220  682172 provision.go:143] copyHostCerts
	I0210 13:14:44.508290  682172 exec_runner.go:144] found /home/jenkins/minikube-integration/20383-625153/.minikube/ca.pem, removing ...
	I0210 13:14:44.508305  682172 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20383-625153/.minikube/ca.pem
	I0210 13:14:44.508359  682172 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20383-625153/.minikube/ca.pem (1082 bytes)
	I0210 13:14:44.508471  682172 exec_runner.go:144] found /home/jenkins/minikube-integration/20383-625153/.minikube/cert.pem, removing ...
	I0210 13:14:44.508479  682172 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20383-625153/.minikube/cert.pem
	I0210 13:14:44.508498  682172 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20383-625153/.minikube/cert.pem (1123 bytes)
	I0210 13:14:44.508561  682172 exec_runner.go:144] found /home/jenkins/minikube-integration/20383-625153/.minikube/key.pem, removing ...
	I0210 13:14:44.508570  682172 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20383-625153/.minikube/key.pem
	I0210 13:14:44.508589  682172 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20383-625153/.minikube/key.pem (1675 bytes)
	I0210 13:14:44.508660  682172 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20383-625153/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-745712 san=[127.0.0.1 192.168.72.78 localhost minikube old-k8s-version-745712]
	I0210 13:14:44.675648  682172 provision.go:177] copyRemoteCerts
	I0210 13:14:44.675714  682172 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0210 13:14:44.675749  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHHostname
	I0210 13:14:44.678547  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:14:44.678865  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:e4:89", ip: ""} in network mk-old-k8s-version-745712: {Iface:virbr2 ExpiryTime:2025-02-10 14:14:36 +0000 UTC Type:0 Mac:52:54:00:dd:e4:89 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-745712 Clientid:01:52:54:00:dd:e4:89}
	I0210 13:14:44.678907  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined IP address 192.168.72.78 and MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:14:44.679050  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHPort
	I0210 13:14:44.679246  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHKeyPath
	I0210 13:14:44.679390  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHUsername
	I0210 13:14:44.679500  682172 sshutil.go:53] new ssh client: &{IP:192.168.72.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/old-k8s-version-745712/id_rsa Username:docker}
	I0210 13:14:44.763002  682172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0210 13:14:44.786811  682172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0210 13:14:44.810662  682172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0210 13:14:44.833463  682172 provision.go:87] duration metric: took 332.173507ms to configureAuth
	I0210 13:14:44.833497  682172 buildroot.go:189] setting minikube options for container-runtime
	I0210 13:14:44.833698  682172 config.go:182] Loaded profile config "old-k8s-version-745712": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0210 13:14:44.833805  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHHostname
	I0210 13:14:44.836736  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:14:44.837062  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:e4:89", ip: ""} in network mk-old-k8s-version-745712: {Iface:virbr2 ExpiryTime:2025-02-10 14:14:36 +0000 UTC Type:0 Mac:52:54:00:dd:e4:89 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-745712 Clientid:01:52:54:00:dd:e4:89}
	I0210 13:14:44.837123  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined IP address 192.168.72.78 and MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:14:44.837336  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHPort
	I0210 13:14:44.837539  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHKeyPath
	I0210 13:14:44.837673  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHKeyPath
	I0210 13:14:44.837828  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHUsername
	I0210 13:14:44.837974  682172 main.go:141] libmachine: Using SSH client type: native
	I0210 13:14:44.838174  682172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.72.78 22 <nil> <nil>}
	I0210 13:14:44.838188  682172 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0210 13:14:45.060121  682172 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0210 13:14:45.060154  682172 main.go:141] libmachine: Checking connection to Docker...
	I0210 13:14:45.060164  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetURL
	I0210 13:14:45.061454  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | using libvirt version 6000000
	I0210 13:14:45.063708  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:14:45.064054  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:e4:89", ip: ""} in network mk-old-k8s-version-745712: {Iface:virbr2 ExpiryTime:2025-02-10 14:14:36 +0000 UTC Type:0 Mac:52:54:00:dd:e4:89 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-745712 Clientid:01:52:54:00:dd:e4:89}
	I0210 13:14:45.064087  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined IP address 192.168.72.78 and MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:14:45.064228  682172 main.go:141] libmachine: Docker is up and running!
	I0210 13:14:45.064246  682172 main.go:141] libmachine: Reticulating splines...
	I0210 13:14:45.064256  682172 client.go:171] duration metric: took 24.724453722s to LocalClient.Create
	I0210 13:14:45.064286  682172 start.go:167] duration metric: took 24.724536197s to libmachine.API.Create "old-k8s-version-745712"
	I0210 13:14:45.064304  682172 start.go:293] postStartSetup for "old-k8s-version-745712" (driver="kvm2")
	I0210 13:14:45.064313  682172 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0210 13:14:45.064331  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .DriverName
	I0210 13:14:45.064563  682172 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0210 13:14:45.064588  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHHostname
	I0210 13:14:45.066881  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:14:45.067208  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:e4:89", ip: ""} in network mk-old-k8s-version-745712: {Iface:virbr2 ExpiryTime:2025-02-10 14:14:36 +0000 UTC Type:0 Mac:52:54:00:dd:e4:89 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-745712 Clientid:01:52:54:00:dd:e4:89}
	I0210 13:14:45.067237  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined IP address 192.168.72.78 and MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:14:45.067372  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHPort
	I0210 13:14:45.067556  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHKeyPath
	I0210 13:14:45.067776  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHUsername
	I0210 13:14:45.067930  682172 sshutil.go:53] new ssh client: &{IP:192.168.72.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/old-k8s-version-745712/id_rsa Username:docker}
	I0210 13:14:45.152456  682172 ssh_runner.go:195] Run: cat /etc/os-release
	I0210 13:14:45.156303  682172 info.go:137] Remote host: Buildroot 2023.02.9
	I0210 13:14:45.156333  682172 filesync.go:126] Scanning /home/jenkins/minikube-integration/20383-625153/.minikube/addons for local assets ...
	I0210 13:14:45.156399  682172 filesync.go:126] Scanning /home/jenkins/minikube-integration/20383-625153/.minikube/files for local assets ...
	I0210 13:14:45.156475  682172 filesync.go:149] local asset: /home/jenkins/minikube-integration/20383-625153/.minikube/files/etc/ssl/certs/6323522.pem -> 6323522.pem in /etc/ssl/certs
	I0210 13:14:45.156768  682172 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0210 13:14:45.168386  682172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/files/etc/ssl/certs/6323522.pem --> /etc/ssl/certs/6323522.pem (1708 bytes)
	I0210 13:14:45.193029  682172 start.go:296] duration metric: took 128.706252ms for postStartSetup
	I0210 13:14:45.193125  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetConfigRaw
	I0210 13:14:45.193764  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetIP
	I0210 13:14:45.196488  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:14:45.196869  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:e4:89", ip: ""} in network mk-old-k8s-version-745712: {Iface:virbr2 ExpiryTime:2025-02-10 14:14:36 +0000 UTC Type:0 Mac:52:54:00:dd:e4:89 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-745712 Clientid:01:52:54:00:dd:e4:89}
	I0210 13:14:45.196899  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined IP address 192.168.72.78 and MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:14:45.197176  682172 profile.go:143] Saving config to /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/old-k8s-version-745712/config.json ...
	I0210 13:14:45.197372  682172 start.go:128] duration metric: took 24.877368067s to createHost
	I0210 13:14:45.197420  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHHostname
	I0210 13:14:45.199809  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:14:45.200159  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:e4:89", ip: ""} in network mk-old-k8s-version-745712: {Iface:virbr2 ExpiryTime:2025-02-10 14:14:36 +0000 UTC Type:0 Mac:52:54:00:dd:e4:89 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-745712 Clientid:01:52:54:00:dd:e4:89}
	I0210 13:14:45.200197  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined IP address 192.168.72.78 and MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:14:45.200314  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHPort
	I0210 13:14:45.200516  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHKeyPath
	I0210 13:14:45.200669  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHKeyPath
	I0210 13:14:45.200785  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHUsername
	I0210 13:14:45.200942  682172 main.go:141] libmachine: Using SSH client type: native
	I0210 13:14:45.201176  682172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.72.78 22 <nil> <nil>}
	I0210 13:14:45.201190  682172 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0210 13:14:45.305699  682172 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739193285.277200269
	
	I0210 13:14:45.305728  682172 fix.go:216] guest clock: 1739193285.277200269
	I0210 13:14:45.305735  682172 fix.go:229] Guest: 2025-02-10 13:14:45.277200269 +0000 UTC Remote: 2025-02-10 13:14:45.19738607 +0000 UTC m=+24.995254328 (delta=79.814199ms)
	I0210 13:14:45.305767  682172 fix.go:200] guest clock delta is within tolerance: 79.814199ms
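
The guest-clock check above runs `date +%s.%N` in the VM, compares the result with the host's wall clock, and only treats the machine as skewed when the absolute delta exceeds a tolerance (here the delta was about 79.8ms). The comparison itself is just this, shown as a small sketch in the same hypothetical package:

	package kvmsketch

	import "time"

	// clockDeltaOK reports whether guest/host clock skew is within tolerance,
	// as in the "guest clock delta is within tolerance" line above.
	func clockDeltaOK(guest, host time.Time, tolerance time.Duration) bool {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta <= tolerance
	}
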
	I0210 13:14:45.305773  682172 start.go:83] releasing machines lock for "old-k8s-version-745712", held for 24.985875078s
	I0210 13:14:45.305795  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .DriverName
	I0210 13:14:45.306093  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetIP
	I0210 13:14:45.309461  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:14:45.309915  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:e4:89", ip: ""} in network mk-old-k8s-version-745712: {Iface:virbr2 ExpiryTime:2025-02-10 14:14:36 +0000 UTC Type:0 Mac:52:54:00:dd:e4:89 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-745712 Clientid:01:52:54:00:dd:e4:89}
	I0210 13:14:45.309950  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined IP address 192.168.72.78 and MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:14:45.310216  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .DriverName
	I0210 13:14:45.310695  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .DriverName
	I0210 13:14:45.310875  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .DriverName
	I0210 13:14:45.310954  682172 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0210 13:14:45.311005  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHHostname
	I0210 13:14:45.311276  682172 ssh_runner.go:195] Run: cat /version.json
	I0210 13:14:45.311320  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHHostname
	I0210 13:14:45.313909  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:14:45.314051  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:14:45.314291  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:e4:89", ip: ""} in network mk-old-k8s-version-745712: {Iface:virbr2 ExpiryTime:2025-02-10 14:14:36 +0000 UTC Type:0 Mac:52:54:00:dd:e4:89 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-745712 Clientid:01:52:54:00:dd:e4:89}
	I0210 13:14:45.314317  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined IP address 192.168.72.78 and MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:14:45.314494  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:e4:89", ip: ""} in network mk-old-k8s-version-745712: {Iface:virbr2 ExpiryTime:2025-02-10 14:14:36 +0000 UTC Type:0 Mac:52:54:00:dd:e4:89 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-745712 Clientid:01:52:54:00:dd:e4:89}
	I0210 13:14:45.314506  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHPort
	I0210 13:14:45.314528  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined IP address 192.168.72.78 and MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:14:45.314702  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHKeyPath
	I0210 13:14:45.314707  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHPort
	I0210 13:14:45.314882  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHKeyPath
	I0210 13:14:45.314897  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHUsername
	I0210 13:14:45.315021  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHUsername
	I0210 13:14:45.315074  682172 sshutil.go:53] new ssh client: &{IP:192.168.72.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/old-k8s-version-745712/id_rsa Username:docker}
	I0210 13:14:45.315191  682172 sshutil.go:53] new ssh client: &{IP:192.168.72.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/old-k8s-version-745712/id_rsa Username:docker}
	I0210 13:14:45.418108  682172 ssh_runner.go:195] Run: systemctl --version
	I0210 13:14:45.423784  682172 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0210 13:14:45.584490  682172 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0210 13:14:45.591240  682172 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0210 13:14:45.591326  682172 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0210 13:14:45.608851  682172 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
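
[editor's note] The find/mv step above renames any bridge or podman CNI configs to *.mk_disabled so only the intended CNI remains active. A hedged local sketch of the same rename pass, not minikube's actual implementation:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNIConfigs renames bridge/podman CNI configs in dir so the
// container runtime cannot pick them up, mirroring the find/mv step above.
func disableBridgeCNIConfigs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableBridgeCNIConfigs("/etc/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
	fmt.Println("disabled:", disabled)
}
```
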
	I0210 13:14:45.608886  682172 start.go:495] detecting cgroup driver to use...
	I0210 13:14:45.608983  682172 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0210 13:14:45.629788  682172 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0210 13:14:45.646330  682172 docker.go:217] disabling cri-docker service (if available) ...
	I0210 13:14:45.646412  682172 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0210 13:14:45.660209  682172 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0210 13:14:45.672873  682172 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0210 13:14:45.782708  682172 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0210 13:14:45.930552  682172 docker.go:233] disabling docker service ...
	I0210 13:14:45.930626  682172 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0210 13:14:45.943991  682172 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0210 13:14:45.956198  682172 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0210 13:14:46.063464  682172 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0210 13:14:46.180105  682172 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0210 13:14:46.193598  682172 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0210 13:14:46.210559  682172 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0210 13:14:46.210616  682172 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:14:46.219804  682172 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0210 13:14:46.219884  682172 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:14:46.229343  682172 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:14:46.238805  682172 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
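
[editor's note] The three sed invocations above pin the pause image, force the cgroupfs cgroup manager, and re-insert conmon_cgroup = "pod" in the CRI-O drop-in. A hedged sketch of the same in-place edits on a config snippet, using Go regexps rather than sed:

```go
package main

import (
	"fmt"
	"regexp"
)

// setCrioCgroupDriver rewrites a crio.conf.d snippet the way the sed commands
// above do: pin the pause image, force cgroupfs, keep conmon in the "pod" cgroup.
func setCrioCgroupDriver(conf string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.2"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*$\n?`).
		ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	return conf
}

func main() {
	in := "pause_image = \"old\"\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
	fmt.Print(setCrioCgroupDriver(in))
}
```
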
	I0210 13:14:46.248612  682172 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0210 13:14:46.258352  682172 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0210 13:14:46.267273  682172 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0210 13:14:46.267335  682172 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0210 13:14:46.280542  682172 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
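
[editor's note] The sysctl probe above fails with status 255 because br_netfilter is not loaded yet, which the log marks as "might be okay"; the runner then falls back to modprobe and enables IPv4 forwarding. A hedged sketch of that fallback (requires root, paths as in the log):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the fallback above: probe the sysctl key and,
// if it is missing, load br_netfilter, then make sure IPv4 forwarding is on.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// Key absent until the module is loaded; this is the expected path.
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
	}
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644)
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```
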
	I0210 13:14:46.296367  682172 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 13:14:46.410767  682172 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0210 13:14:46.503654  682172 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0210 13:14:46.503746  682172 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0210 13:14:46.509087  682172 start.go:563] Will wait 60s for crictl version
	I0210 13:14:46.509182  682172 ssh_runner.go:195] Run: which crictl
	I0210 13:14:46.512971  682172 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0210 13:14:46.557031  682172 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0210 13:14:46.557133  682172 ssh_runner.go:195] Run: crio --version
	I0210 13:14:46.592284  682172 ssh_runner.go:195] Run: crio --version
	I0210 13:14:46.694777  682172 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0210 13:14:46.787922  682172 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetIP
	I0210 13:14:46.790874  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:14:46.791161  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:e4:89", ip: ""} in network mk-old-k8s-version-745712: {Iface:virbr2 ExpiryTime:2025-02-10 14:14:36 +0000 UTC Type:0 Mac:52:54:00:dd:e4:89 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-745712 Clientid:01:52:54:00:dd:e4:89}
	I0210 13:14:46.791188  682172 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined IP address 192.168.72.78 and MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:14:46.791395  682172 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0210 13:14:46.795489  682172 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 13:14:46.806964  682172 kubeadm.go:883] updating cluster {Name:old-k8s-version-745712 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-745712 Namespace
:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.78 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0210 13:14:46.807082  682172 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0210 13:14:46.807127  682172 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 13:14:46.836459  682172 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0210 13:14:46.836543  682172 ssh_runner.go:195] Run: which lz4
	I0210 13:14:46.840213  682172 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0210 13:14:46.843959  682172 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0210 13:14:46.843996  682172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0210 13:14:48.232758  682172 crio.go:462] duration metric: took 1.392570671s to copy over tarball
	I0210 13:14:48.232838  682172 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0210 13:14:50.778971  682172 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.546100658s)
	I0210 13:14:50.779014  682172 crio.go:469] duration metric: took 2.546215442s to extract the tarball
	I0210 13:14:50.779025  682172 ssh_runner.go:146] rm: /preloaded.tar.lz4
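
[editor's note] The sequence above stats /preloaded.tar.lz4 on the guest, copies the ~473 MB preload archive over SSH when it is absent, extracts it into /var with tar -I lz4, and removes the archive. A hedged local sketch of the check-then-extract portion (run on the guest; not minikube's ssh_runner code):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload extracts a preloaded image tarball into dst and removes it,
// mirroring the tar/rm steps above. It assumes the tarball was already copied.
func extractPreload(tarball, dst string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("preload tarball not present: %w", err)
	}
	// tar -I lz4 needs the lz4 binary on PATH, as in the guest image above.
	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", dst, "-xf", tarball)
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		return fmt.Errorf("extract %s: %w", tarball, err)
	}
	return os.Remove(tarball)
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```
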
	I0210 13:14:50.823187  682172 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 13:14:50.866620  682172 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0210 13:14:50.866659  682172 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0210 13:14:50.866738  682172 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0210 13:14:50.866772  682172 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 13:14:50.866801  682172 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0210 13:14:50.866822  682172 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0210 13:14:50.866838  682172 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0210 13:14:50.866805  682172 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0210 13:14:50.866868  682172 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0210 13:14:50.866722  682172 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 13:14:50.868421  682172 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0210 13:14:50.868433  682172 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0210 13:14:50.868427  682172 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 13:14:50.868427  682172 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0210 13:14:50.868425  682172 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 13:14:50.868433  682172 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0210 13:14:50.868490  682172 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0210 13:14:50.868658  682172 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0210 13:14:51.010064  682172 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0210 13:14:51.010325  682172 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0210 13:14:51.016725  682172 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0210 13:14:51.022451  682172 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 13:14:51.023151  682172 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0210 13:14:51.038464  682172 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0210 13:14:51.043336  682172 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0210 13:14:51.143129  682172 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0210 13:14:51.143163  682172 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0210 13:14:51.143184  682172 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0210 13:14:51.143203  682172 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0210 13:14:51.143251  682172 ssh_runner.go:195] Run: which crictl
	I0210 13:14:51.143252  682172 ssh_runner.go:195] Run: which crictl
	I0210 13:14:51.200617  682172 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0210 13:14:51.200672  682172 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 13:14:51.200733  682172 ssh_runner.go:195] Run: which crictl
	I0210 13:14:51.200794  682172 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0210 13:14:51.200841  682172 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0210 13:14:51.200890  682172 ssh_runner.go:195] Run: which crictl
	I0210 13:14:51.210121  682172 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0210 13:14:51.210183  682172 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0210 13:14:51.210241  682172 ssh_runner.go:195] Run: which crictl
	I0210 13:14:51.217678  682172 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0210 13:14:51.217711  682172 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0210 13:14:51.217735  682172 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0210 13:14:51.217748  682172 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0210 13:14:51.217785  682172 ssh_runner.go:195] Run: which crictl
	I0210 13:14:51.217819  682172 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 13:14:51.217834  682172 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0210 13:14:51.217783  682172 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0210 13:14:51.217789  682172 ssh_runner.go:195] Run: which crictl
	I0210 13:14:51.217873  682172 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0210 13:14:51.217928  682172 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0210 13:14:51.242164  682172 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0210 13:14:51.339431  682172 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0210 13:14:51.350089  682172 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 13:14:51.350162  682172 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0210 13:14:51.350172  682172 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0210 13:14:51.350219  682172 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0210 13:14:51.350263  682172 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0210 13:14:51.393393  682172 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0210 13:14:51.467328  682172 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0210 13:14:51.544552  682172 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 13:14:51.600415  682172 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0210 13:14:51.600471  682172 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0210 13:14:51.600494  682172 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0210 13:14:51.600504  682172 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0210 13:14:51.600546  682172 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0210 13:14:51.600552  682172 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20383-625153/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0210 13:14:51.600608  682172 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20383-625153/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0210 13:14:51.714794  682172 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20383-625153/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0210 13:14:51.714950  682172 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20383-625153/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0210 13:14:51.714964  682172 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20383-625153/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0210 13:14:51.714965  682172 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0210 13:14:51.714997  682172 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20383-625153/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0210 13:14:51.748728  682172 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20383-625153/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0210 13:14:51.862100  682172 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 13:14:52.004078  682172 cache_images.go:92] duration metric: took 1.137397454s to LoadCachedImages
	W0210 13:14:52.004212  682172 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20383-625153/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20383-625153/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0210 13:14:52.004234  682172 kubeadm.go:934] updating node { 192.168.72.78 8443 v1.20.0 crio true true} ...
	I0210 13:14:52.004371  682172 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-745712 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.78
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-745712 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0210 13:14:52.004470  682172 ssh_runner.go:195] Run: crio config
	I0210 13:14:52.058352  682172 cni.go:84] Creating CNI manager for ""
	I0210 13:14:52.058379  682172 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 13:14:52.058398  682172 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0210 13:14:52.058426  682172 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.78 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-745712 NodeName:old-k8s-version-745712 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.78"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.78 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0210 13:14:52.058596  682172 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.78
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-745712"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.78
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.78"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0210 13:14:52.058677  682172 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0210 13:14:52.068572  682172 binaries.go:44] Found k8s binaries, skipping transfer
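
[editor's note] A minimal sketch that pulls the version and control-plane endpoint back out of the ClusterConfiguration document rendered above, assuming gopkg.in/yaml.v3 is available; minikube's own config handling is not shown in this log:

```go
package main

import (
	"fmt"
	"log"

	"gopkg.in/yaml.v3"
)

// clusterConfig covers only the fields this sketch inspects.
type clusterConfig struct {
	Kind                 string `yaml:"kind"`
	KubernetesVersion    string `yaml:"kubernetesVersion"`
	ControlPlaneEndpoint string `yaml:"controlPlaneEndpoint"`
	Networking           struct {
		PodSubnet     string `yaml:"podSubnet"`
		ServiceSubnet string `yaml:"serviceSubnet"`
	} `yaml:"networking"`
}

func main() {
	doc := []byte(`
kind: ClusterConfiguration
kubernetesVersion: v1.20.0
controlPlaneEndpoint: control-plane.minikube.internal:8443
networking:
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
`)
	var cfg clusterConfig
	if err := yaml.Unmarshal(doc, &cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s on %s via %s\n", cfg.Kind, cfg.KubernetesVersion, cfg.ControlPlaneEndpoint)
}
```
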
	I0210 13:14:52.068648  682172 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0210 13:14:52.077778  682172 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0210 13:14:52.094300  682172 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0210 13:14:52.110040  682172 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0210 13:14:52.126523  682172 ssh_runner.go:195] Run: grep 192.168.72.78	control-plane.minikube.internal$ /etc/hosts
	I0210 13:14:52.130428  682172 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.78	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 13:14:52.142166  682172 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 13:14:52.277069  682172 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 13:14:52.296962  682172 certs.go:68] Setting up /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/old-k8s-version-745712 for IP: 192.168.72.78
	I0210 13:14:52.296993  682172 certs.go:194] generating shared ca certs ...
	I0210 13:14:52.297018  682172 certs.go:226] acquiring lock for ca certs: {Name:mkf2f72f82a6bd7e3c16bb224cd26b80c3c89e0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:14:52.297270  682172 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20383-625153/.minikube/ca.key
	I0210 13:14:52.297350  682172 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20383-625153/.minikube/proxy-client-ca.key
	I0210 13:14:52.297368  682172 certs.go:256] generating profile certs ...
	I0210 13:14:52.297459  682172 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/old-k8s-version-745712/client.key
	I0210 13:14:52.297486  682172 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/old-k8s-version-745712/client.crt with IP's: []
	I0210 13:14:52.398082  682172 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/old-k8s-version-745712/client.crt ...
	I0210 13:14:52.398126  682172 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/old-k8s-version-745712/client.crt: {Name:mk8866bedf6d0c5eea5b5f16d35fdcfa165da624 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:14:52.398338  682172 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/old-k8s-version-745712/client.key ...
	I0210 13:14:52.398362  682172 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/old-k8s-version-745712/client.key: {Name:mkf487e26cdec63e0d8d0db38f4f23879595358c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:14:52.398502  682172 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/old-k8s-version-745712/apiserver.key.f20ca5cb
	I0210 13:14:52.398526  682172 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/old-k8s-version-745712/apiserver.crt.f20ca5cb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.78]
	I0210 13:14:52.784120  682172 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/old-k8s-version-745712/apiserver.crt.f20ca5cb ...
	I0210 13:14:52.784155  682172 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/old-k8s-version-745712/apiserver.crt.f20ca5cb: {Name:mk9f46fd8c7564133fd155ed7105da02cdceae90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:14:52.784371  682172 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/old-k8s-version-745712/apiserver.key.f20ca5cb ...
	I0210 13:14:52.784392  682172 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/old-k8s-version-745712/apiserver.key.f20ca5cb: {Name:mkd38e78e7e6d443ed74e1b1ace0177e17da5c09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:14:52.784520  682172 certs.go:381] copying /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/old-k8s-version-745712/apiserver.crt.f20ca5cb -> /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/old-k8s-version-745712/apiserver.crt
	I0210 13:14:52.784600  682172 certs.go:385] copying /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/old-k8s-version-745712/apiserver.key.f20ca5cb -> /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/old-k8s-version-745712/apiserver.key
	I0210 13:14:52.784652  682172 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/old-k8s-version-745712/proxy-client.key
	I0210 13:14:52.784668  682172 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/old-k8s-version-745712/proxy-client.crt with IP's: []
	I0210 13:14:52.860364  682172 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/old-k8s-version-745712/proxy-client.crt ...
	I0210 13:14:52.860393  682172 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/old-k8s-version-745712/proxy-client.crt: {Name:mkbc9c45bc194843a3245f499f35dfc4d9032392 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:14:52.860565  682172 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/old-k8s-version-745712/proxy-client.key ...
	I0210 13:14:52.860613  682172 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/old-k8s-version-745712/proxy-client.key: {Name:mk2d6b8b2caaef24eda83dd2b91a9822c51e0d7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:14:52.860872  682172 certs.go:484] found cert: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/632352.pem (1338 bytes)
	W0210 13:14:52.860914  682172 certs.go:480] ignoring /home/jenkins/minikube-integration/20383-625153/.minikube/certs/632352_empty.pem, impossibly tiny 0 bytes
	I0210 13:14:52.860925  682172 certs.go:484] found cert: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca-key.pem (1675 bytes)
	I0210 13:14:52.860945  682172 certs.go:484] found cert: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca.pem (1082 bytes)
	I0210 13:14:52.860967  682172 certs.go:484] found cert: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/cert.pem (1123 bytes)
	I0210 13:14:52.860987  682172 certs.go:484] found cert: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/key.pem (1675 bytes)
	I0210 13:14:52.861025  682172 certs.go:484] found cert: /home/jenkins/minikube-integration/20383-625153/.minikube/files/etc/ssl/certs/6323522.pem (1708 bytes)
	I0210 13:14:52.861651  682172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0210 13:14:52.887216  682172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0210 13:14:52.909860  682172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0210 13:14:52.932644  682172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0210 13:14:52.954608  682172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/old-k8s-version-745712/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0210 13:14:53.000437  682172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/old-k8s-version-745712/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0210 13:14:53.035269  682172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/old-k8s-version-745712/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0210 13:14:53.143691  682172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/old-k8s-version-745712/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0210 13:14:53.217151  682172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0210 13:14:53.247992  682172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/certs/632352.pem --> /usr/share/ca-certificates/632352.pem (1338 bytes)
	I0210 13:14:53.270692  682172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/files/etc/ssl/certs/6323522.pem --> /usr/share/ca-certificates/6323522.pem (1708 bytes)
	I0210 13:14:53.294011  682172 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0210 13:14:53.309374  682172 ssh_runner.go:195] Run: openssl version
	I0210 13:14:53.314628  682172 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0210 13:14:53.324649  682172 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0210 13:14:53.328766  682172 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 10 12:05 /usr/share/ca-certificates/minikubeCA.pem
	I0210 13:14:53.328818  682172 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0210 13:14:53.334309  682172 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0210 13:14:53.344678  682172 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/632352.pem && ln -fs /usr/share/ca-certificates/632352.pem /etc/ssl/certs/632352.pem"
	I0210 13:14:53.354978  682172 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/632352.pem
	I0210 13:14:53.358973  682172 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 10 12:13 /usr/share/ca-certificates/632352.pem
	I0210 13:14:53.359025  682172 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/632352.pem
	I0210 13:14:53.364252  682172 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/632352.pem /etc/ssl/certs/51391683.0"
	I0210 13:14:53.374540  682172 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6323522.pem && ln -fs /usr/share/ca-certificates/6323522.pem /etc/ssl/certs/6323522.pem"
	I0210 13:14:53.385028  682172 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6323522.pem
	I0210 13:14:53.389180  682172 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 10 12:13 /usr/share/ca-certificates/6323522.pem
	I0210 13:14:53.389226  682172 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6323522.pem
	I0210 13:14:53.394483  682172 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6323522.pem /etc/ssl/certs/3ec20f2e.0"
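
[editor's note] The openssl/ln pairs above install each extra CA under /usr/share/ca-certificates and link it into /etc/ssl/certs under its OpenSSL subject-hash name (<hash>.0). A hedged sketch of that hash-and-symlink step, shelling out to openssl because the subject hash is OpenSSL-specific:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert symlinks a CA certificate into certsDir under its OpenSSL
// subject-hash name (<hash>.0), which is how TLS libraries locate it.
func linkCACert(pemPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return "", fmt.Errorf("hash %s: %w", pemPath, err)
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // -f behaviour: replace an existing link
	return link, os.Symlink(pemPath, link)
}

func main() {
	link, err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("linked as", link)
}
```
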
	I0210 13:14:53.405602  682172 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0210 13:14:53.409366  682172 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0210 13:14:53.409432  682172 kubeadm.go:392] StartCluster: {Name:old-k8s-version-745712 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-745712 Namespace:de
fault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.78 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disa
bleOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 13:14:53.409535  682172 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0210 13:14:53.409614  682172 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0210 13:14:53.445633  682172 cri.go:89] found id: ""
	I0210 13:14:53.445703  682172 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0210 13:14:53.455890  682172 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0210 13:14:53.464893  682172 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 13:14:53.474970  682172 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 13:14:53.474992  682172 kubeadm.go:157] found existing configuration files:
	
	I0210 13:14:53.475046  682172 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0210 13:14:53.484704  682172 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 13:14:53.484768  682172 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 13:14:53.494631  682172 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0210 13:14:53.504400  682172 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 13:14:53.504489  682172 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 13:14:53.513989  682172 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0210 13:14:53.522408  682172 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 13:14:53.522473  682172 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 13:14:53.531257  682172 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0210 13:14:53.539586  682172 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 13:14:53.539644  682172 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
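
[editor's note] The grep-then-rm pairs above remove any kubeconfig under /etc/kubernetes that does not already point at the expected control-plane endpoint (here they are simply missing, so this is a first start). A hedged sketch of that stale-config cleanup, not minikube's actual kubeadm.go code:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// removeStaleKubeconfig deletes conf unless it already points at the expected
// control-plane endpoint, mirroring the grep-then-rm sequence above.
func removeStaleKubeconfig(conf, endpoint string) error {
	data, err := os.ReadFile(conf)
	if err == nil && strings.Contains(string(data), endpoint) {
		return nil // still valid, keep it
	}
	if err := os.Remove(conf); err != nil && !os.IsNotExist(err) {
		return err
	}
	return nil
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := removeStaleKubeconfig(conf, endpoint); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}
```
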
	I0210 13:14:53.548213  682172 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0210 13:14:53.803405  682172 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0210 13:16:52.014082  682172 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0210 13:16:52.014169  682172 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0210 13:16:52.015526  682172 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0210 13:16:52.015566  682172 kubeadm.go:310] [preflight] Running pre-flight checks
	I0210 13:16:52.015628  682172 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0210 13:16:52.015748  682172 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0210 13:16:52.015866  682172 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0210 13:16:52.015973  682172 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0210 13:16:52.017897  682172 out.go:235]   - Generating certificates and keys ...
	I0210 13:16:52.017986  682172 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0210 13:16:52.018046  682172 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0210 13:16:52.018110  682172 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0210 13:16:52.018159  682172 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0210 13:16:52.018248  682172 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0210 13:16:52.018352  682172 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0210 13:16:52.018451  682172 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0210 13:16:52.018578  682172 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-745712] and IPs [192.168.72.78 127.0.0.1 ::1]
	I0210 13:16:52.018648  682172 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0210 13:16:52.018825  682172 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-745712] and IPs [192.168.72.78 127.0.0.1 ::1]
	I0210 13:16:52.018928  682172 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0210 13:16:52.019021  682172 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0210 13:16:52.019093  682172 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0210 13:16:52.019174  682172 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0210 13:16:52.019256  682172 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0210 13:16:52.019343  682172 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0210 13:16:52.019432  682172 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0210 13:16:52.019513  682172 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0210 13:16:52.019630  682172 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0210 13:16:52.019747  682172 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0210 13:16:52.019801  682172 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0210 13:16:52.019892  682172 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0210 13:16:52.021521  682172 out.go:235]   - Booting up control plane ...
	I0210 13:16:52.021611  682172 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0210 13:16:52.021694  682172 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0210 13:16:52.021817  682172 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0210 13:16:52.021899  682172 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0210 13:16:52.022045  682172 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0210 13:16:52.022097  682172 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0210 13:16:52.022159  682172 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:16:52.022349  682172 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:16:52.022423  682172 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:16:52.022584  682172 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:16:52.022645  682172 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:16:52.022820  682172 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:16:52.022915  682172 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:16:52.023076  682172 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:16:52.023136  682172 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:16:52.023295  682172 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:16:52.023301  682172 kubeadm.go:310] 
	I0210 13:16:52.023335  682172 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0210 13:16:52.023374  682172 kubeadm.go:310] 		timed out waiting for the condition
	I0210 13:16:52.023381  682172 kubeadm.go:310] 
	I0210 13:16:52.023408  682172 kubeadm.go:310] 	This error is likely caused by:
	I0210 13:16:52.023452  682172 kubeadm.go:310] 		- The kubelet is not running
	I0210 13:16:52.023565  682172 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0210 13:16:52.023576  682172 kubeadm.go:310] 
	I0210 13:16:52.023655  682172 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0210 13:16:52.023683  682172 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0210 13:16:52.023714  682172 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0210 13:16:52.023721  682172 kubeadm.go:310] 
	I0210 13:16:52.023818  682172 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0210 13:16:52.023890  682172 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0210 13:16:52.023896  682172 kubeadm.go:310] 
	I0210 13:16:52.023997  682172 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0210 13:16:52.024075  682172 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0210 13:16:52.024141  682172 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0210 13:16:52.024236  682172 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0210 13:16:52.024281  682172 kubeadm.go:310] 
	W0210 13:16:52.024382  682172 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-745712] and IPs [192.168.72.78 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-745712] and IPs [192.168.72.78 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-745712] and IPs [192.168.72.78 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-745712] and IPs [192.168.72.78 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0210 13:16:52.024434  682172 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0210 13:16:52.517319  682172 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 13:16:52.531791  682172 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 13:16:52.541180  682172 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 13:16:52.541203  682172 kubeadm.go:157] found existing configuration files:
	
	I0210 13:16:52.541253  682172 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0210 13:16:52.550141  682172 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 13:16:52.550201  682172 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 13:16:52.559060  682172 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0210 13:16:52.567404  682172 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 13:16:52.567476  682172 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 13:16:52.576679  682172 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0210 13:16:52.585524  682172 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 13:16:52.585595  682172 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 13:16:52.594776  682172 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0210 13:16:52.603815  682172 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 13:16:52.603873  682172 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0210 13:16:52.612996  682172 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0210 13:16:52.831260  682172 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0210 13:18:48.904473  682172 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0210 13:18:48.904609  682172 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0210 13:18:48.906409  682172 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0210 13:18:48.906513  682172 kubeadm.go:310] [preflight] Running pre-flight checks
	I0210 13:18:48.906646  682172 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0210 13:18:48.906778  682172 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0210 13:18:48.906920  682172 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0210 13:18:48.907024  682172 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0210 13:18:48.908270  682172 out.go:235]   - Generating certificates and keys ...
	I0210 13:18:48.908368  682172 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0210 13:18:48.908489  682172 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0210 13:18:48.908601  682172 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0210 13:18:48.908682  682172 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0210 13:18:48.908770  682172 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0210 13:18:48.908847  682172 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0210 13:18:48.908930  682172 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0210 13:18:48.909012  682172 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0210 13:18:48.909142  682172 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0210 13:18:48.909243  682172 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0210 13:18:48.909298  682172 kubeadm.go:310] [certs] Using the existing "sa" key
	I0210 13:18:48.909373  682172 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0210 13:18:48.909445  682172 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0210 13:18:48.909517  682172 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0210 13:18:48.909600  682172 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0210 13:18:48.909673  682172 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0210 13:18:48.909801  682172 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0210 13:18:48.909918  682172 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0210 13:18:48.909974  682172 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0210 13:18:48.910063  682172 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0210 13:18:48.911882  682172 out.go:235]   - Booting up control plane ...
	I0210 13:18:48.911999  682172 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0210 13:18:48.912130  682172 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0210 13:18:48.912230  682172 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0210 13:18:48.912325  682172 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0210 13:18:48.912541  682172 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0210 13:18:48.912615  682172 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0210 13:18:48.912743  682172 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:18:48.912934  682172 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:18:48.913028  682172 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:18:48.913294  682172 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:18:48.913383  682172 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:18:48.913622  682172 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:18:48.913715  682172 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:18:48.913908  682172 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:18:48.913964  682172 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:18:48.914112  682172 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:18:48.914116  682172 kubeadm.go:310] 
	I0210 13:18:48.914157  682172 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0210 13:18:48.914189  682172 kubeadm.go:310] 		timed out waiting for the condition
	I0210 13:18:48.914193  682172 kubeadm.go:310] 
	I0210 13:18:48.914221  682172 kubeadm.go:310] 	This error is likely caused by:
	I0210 13:18:48.914250  682172 kubeadm.go:310] 		- The kubelet is not running
	I0210 13:18:48.914334  682172 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0210 13:18:48.914342  682172 kubeadm.go:310] 
	I0210 13:18:48.914429  682172 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0210 13:18:48.914455  682172 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0210 13:18:48.914481  682172 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0210 13:18:48.914484  682172 kubeadm.go:310] 
	I0210 13:18:48.914567  682172 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0210 13:18:48.914633  682172 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0210 13:18:48.914637  682172 kubeadm.go:310] 
	I0210 13:18:48.914728  682172 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0210 13:18:48.914802  682172 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0210 13:18:48.914864  682172 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0210 13:18:48.914921  682172 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0210 13:18:48.914987  682172 kubeadm.go:394] duration metric: took 3m55.505567485s to StartCluster
	I0210 13:18:48.915055  682172 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:18:48.915110  682172 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:18:48.915183  682172 kubeadm.go:310] 
	I0210 13:18:48.958318  682172 cri.go:89] found id: ""
	I0210 13:18:48.958347  682172 logs.go:282] 0 containers: []
	W0210 13:18:48.958357  682172 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:18:48.958365  682172 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:18:48.958436  682172 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:18:48.997185  682172 cri.go:89] found id: ""
	I0210 13:18:48.997220  682172 logs.go:282] 0 containers: []
	W0210 13:18:48.997231  682172 logs.go:284] No container was found matching "etcd"
	I0210 13:18:48.997239  682172 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:18:48.997301  682172 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:18:49.029716  682172 cri.go:89] found id: ""
	I0210 13:18:49.029747  682172 logs.go:282] 0 containers: []
	W0210 13:18:49.029757  682172 logs.go:284] No container was found matching "coredns"
	I0210 13:18:49.029765  682172 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:18:49.029839  682172 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:18:49.074664  682172 cri.go:89] found id: ""
	I0210 13:18:49.074713  682172 logs.go:282] 0 containers: []
	W0210 13:18:49.074725  682172 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:18:49.074734  682172 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:18:49.074801  682172 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:18:49.115179  682172 cri.go:89] found id: ""
	I0210 13:18:49.115213  682172 logs.go:282] 0 containers: []
	W0210 13:18:49.115225  682172 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:18:49.115234  682172 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:18:49.115301  682172 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:18:49.154581  682172 cri.go:89] found id: ""
	I0210 13:18:49.154621  682172 logs.go:282] 0 containers: []
	W0210 13:18:49.154634  682172 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:18:49.154643  682172 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:18:49.154711  682172 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:18:49.193755  682172 cri.go:89] found id: ""
	I0210 13:18:49.193800  682172 logs.go:282] 0 containers: []
	W0210 13:18:49.193814  682172 logs.go:284] No container was found matching "kindnet"
	I0210 13:18:49.193829  682172 logs.go:123] Gathering logs for dmesg ...
	I0210 13:18:49.193846  682172 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:18:49.208602  682172 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:18:49.208639  682172 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:18:49.342391  682172 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:18:49.342419  682172 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:18:49.342435  682172 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:18:49.441835  682172 logs.go:123] Gathering logs for container status ...
	I0210 13:18:49.441882  682172 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:18:49.482346  682172 logs.go:123] Gathering logs for kubelet ...
	I0210 13:18:49.482388  682172 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0210 13:18:49.530871  682172 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0210 13:18:49.530970  682172 out.go:270] * 
	* 
	W0210 13:18:49.531052  682172 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0210 13:18:49.531070  682172 out.go:270] * 
	* 
	W0210 13:18:49.531877  682172 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0210 13:18:49.535003  682172 out.go:201] 
	W0210 13:18:49.536073  682172 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0210 13:18:49.536110  682172 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0210 13:18:49.536144  682172 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0210 13:18:49.537372  682172 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-745712 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
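Note: the failure above is kubeadm's wait-control-plane phase timing out because the kubelet on the v1.20.0 node never answered its healthz probe (every check got "connection refused" on 127.0.0.1:10248). A minimal triage sketch, assuming the old-k8s-version-745712 VM from this run is still reachable; it simply runs, over minikube ssh, the checks that kubeadm's own output recommends:

	# inspect the kubelet unit and its recent journal entries
	out/minikube-linux-amd64 -p old-k8s-version-745712 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-745712 ssh "sudo journalctl -xeu kubelet | tail -n 200"
	# look for control-plane containers that crashed after cri-o started them
	out/minikube-linux-amd64 -p old-k8s-version-745712 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"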
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-745712 -n old-k8s-version-745712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-745712 -n old-k8s-version-745712: exit status 6 (243.302891ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0210 13:18:49.835300  687479 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-745712" does not appear in /home/jenkins/minikube-integration/20383-625153/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-745712" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (269.65s)
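The output above also suggests retrying with the kubelet cgroup driver pinned to systemd. A sketch of that retry, keeping every other flag from the failing invocation unchanged (whether it helps depends on the actual cause, which the kubelet journal should show):

	out/minikube-linux-amd64 start -p old-k8s-version-745712 --memory=2200 --alsologtostderr --wait=true \
		--kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false \
		--driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 \
		--extra-config=kubelet.cgroup-driver=systemd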

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.56s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-745712 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context old-k8s-version-745712 create -f testdata/busybox.yaml: exit status 1 (50.246466ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-745712" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:194: kubectl --context old-k8s-version-745712 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-745712 -n old-k8s-version-745712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-745712 -n old-k8s-version-745712: exit status 6 (257.34147ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0210 13:18:50.131008  687517 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-745712" does not appear in /home/jenkins/minikube-integration/20383-625153/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-745712" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-745712 -n old-k8s-version-745712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-745712 -n old-k8s-version-745712: exit status 6 (247.817313ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0210 13:18:50.390208  687547 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-745712" does not appear in /home/jenkins/minikube-integration/20383-625153/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-745712" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.56s)
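DeployApp fails before any resources are created: kubectl rejects the `--context old-k8s-version-745712` flag because FirstStart never registered that context in the kubeconfig, so the busybox manifest itself is never exercised. A quick way to confirm that (a sketch; the context and manifest names are taken from the log above):

	kubectl config get-contexts                                                 # the old-k8s-version-745712 entry is absent
	kubectl --context old-k8s-version-745712 create -f testdata/busybox.yaml    # keeps failing with "context ... does not exist" until the cluster starts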

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (91.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-745712 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-745712 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m30.932690605s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-745712 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-745712 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-745712 describe deploy/metrics-server -n kube-system: exit status 1 (51.932496ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-745712" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-745712 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-745712 -n old-k8s-version-745712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-745712 -n old-k8s-version-745712: exit status 6 (234.312548ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0210 13:20:21.612336  688780 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-745712" does not appear in /home/jenkins/minikube-integration/20383-625153/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-745712" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (91.22s)
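The addon failure is another symptom of the broken first start: `addons enable` applies the metrics-server manifests with kubectl inside the VM against localhost:8443, and that apiserver was never brought up, hence the "connection refused" in the stderr above. A sketch of checking the apiserver before re-running the same enable command (the enable command is copied from the log; the status and readiness checks are assumptions):

	out/minikube-linux-amd64 status -p old-k8s-version-745712      # expect apiserver: Running before enabling addons
	kubectl --context old-k8s-version-745712 get --raw=/readyz     # assumed extra check; needs a working kubeconfig context
	out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-745712 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain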

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (511.64s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-745712 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0210 13:20:25.121716  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/flannel-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:20:36.599191  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/bridge-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:20:46.485279  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/functional-653300/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:21:01.464325  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/custom-flannel-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:21:06.083124  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/flannel-651187/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-745712 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (8m29.93629758s)

                                                
                                                
-- stdout --
	* [old-k8s-version-745712] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20383
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20383-625153/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20383-625153/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-745712" primary control-plane node in "old-k8s-version-745712" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-745712" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0210 13:20:25.132829  688914 out.go:345] Setting OutFile to fd 1 ...
	I0210 13:20:25.132925  688914 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 13:20:25.132933  688914 out.go:358] Setting ErrFile to fd 2...
	I0210 13:20:25.132937  688914 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 13:20:25.133174  688914 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20383-625153/.minikube/bin
	I0210 13:20:25.133784  688914 out.go:352] Setting JSON to false
	I0210 13:20:25.134891  688914 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":18175,"bootTime":1739175450,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0210 13:20:25.134987  688914 start.go:139] virtualization: kvm guest
	I0210 13:20:25.136717  688914 out.go:177] * [old-k8s-version-745712] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0210 13:20:25.138317  688914 out.go:177]   - MINIKUBE_LOCATION=20383
	I0210 13:20:25.138342  688914 notify.go:220] Checking for updates...
	I0210 13:20:25.140403  688914 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 13:20:25.141566  688914 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20383-625153/kubeconfig
	I0210 13:20:25.142569  688914 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20383-625153/.minikube
	I0210 13:20:25.143667  688914 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0210 13:20:25.144784  688914 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0210 13:20:25.146447  688914 config.go:182] Loaded profile config "old-k8s-version-745712": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0210 13:20:25.147049  688914 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:20:25.147132  688914 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:20:25.162305  688914 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34681
	I0210 13:20:25.162706  688914 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:20:25.163238  688914 main.go:141] libmachine: Using API Version  1
	I0210 13:20:25.163261  688914 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:20:25.163633  688914 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:20:25.163845  688914 main.go:141] libmachine: (old-k8s-version-745712) Calling .DriverName
	I0210 13:20:25.165520  688914 out.go:177] * Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
	I0210 13:20:25.166911  688914 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 13:20:25.167254  688914 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:20:25.167289  688914 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:20:25.183532  688914 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41219
	I0210 13:20:25.183935  688914 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:20:25.184488  688914 main.go:141] libmachine: Using API Version  1
	I0210 13:20:25.184513  688914 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:20:25.185028  688914 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:20:25.185260  688914 main.go:141] libmachine: (old-k8s-version-745712) Calling .DriverName
	I0210 13:20:25.221020  688914 out.go:177] * Using the kvm2 driver based on existing profile
	I0210 13:20:25.222270  688914 start.go:297] selected driver: kvm2
	I0210 13:20:25.222287  688914 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-745712 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-745712 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.78 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 13:20:25.222444  688914 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0210 13:20:25.223098  688914 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 13:20:25.223202  688914 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20383-625153/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0210 13:20:25.239997  688914 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0210 13:20:25.240565  688914 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0210 13:20:25.240618  688914 cni.go:84] Creating CNI manager for ""
	I0210 13:20:25.240680  688914 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 13:20:25.240736  688914 start.go:340] cluster config:
	{Name:old-k8s-version-745712 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-745712 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.78 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 13:20:25.240854  688914 iso.go:125] acquiring lock: {Name:mk013d189757e85c0699014f5ef29205d9d4927f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 13:20:25.242737  688914 out.go:177] * Starting "old-k8s-version-745712" primary control-plane node in "old-k8s-version-745712" cluster
	I0210 13:20:25.243821  688914 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0210 13:20:25.243871  688914 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20383-625153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0210 13:20:25.243884  688914 cache.go:56] Caching tarball of preloaded images
	I0210 13:20:25.243952  688914 preload.go:172] Found /home/jenkins/minikube-integration/20383-625153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0210 13:20:25.243973  688914 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0210 13:20:25.244088  688914 profile.go:143] Saving config to /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/old-k8s-version-745712/config.json ...
	I0210 13:20:25.244304  688914 start.go:360] acquireMachinesLock for old-k8s-version-745712: {Name:mk28e87da66de739a4c7c70d1fb5afc4ce31a4d0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0210 13:20:25.244353  688914 start.go:364] duration metric: took 27.2µs to acquireMachinesLock for "old-k8s-version-745712"
	I0210 13:20:25.244373  688914 start.go:96] Skipping create...Using existing machine configuration
	I0210 13:20:25.244388  688914 fix.go:54] fixHost starting: 
	I0210 13:20:25.244672  688914 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:20:25.244709  688914 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:20:25.258528  688914 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35925
	I0210 13:20:25.258890  688914 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:20:25.259330  688914 main.go:141] libmachine: Using API Version  1
	I0210 13:20:25.259354  688914 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:20:25.259647  688914 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:20:25.259825  688914 main.go:141] libmachine: (old-k8s-version-745712) Calling .DriverName
	I0210 13:20:25.259965  688914 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetState
	I0210 13:20:25.261482  688914 fix.go:112] recreateIfNeeded on old-k8s-version-745712: state=Stopped err=<nil>
	I0210 13:20:25.261505  688914 main.go:141] libmachine: (old-k8s-version-745712) Calling .DriverName
	W0210 13:20:25.261664  688914 fix.go:138] unexpected machine state, will restart: <nil>
	I0210 13:20:25.263428  688914 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-745712" ...
	I0210 13:20:25.264555  688914 main.go:141] libmachine: (old-k8s-version-745712) Calling .Start
	I0210 13:20:25.264733  688914 main.go:141] libmachine: (old-k8s-version-745712) starting domain...
	I0210 13:20:25.264743  688914 main.go:141] libmachine: (old-k8s-version-745712) ensuring networks are active...
	I0210 13:20:25.265546  688914 main.go:141] libmachine: (old-k8s-version-745712) Ensuring network default is active
	I0210 13:20:25.265860  688914 main.go:141] libmachine: (old-k8s-version-745712) Ensuring network mk-old-k8s-version-745712 is active
	I0210 13:20:25.266150  688914 main.go:141] libmachine: (old-k8s-version-745712) getting domain XML...
	I0210 13:20:25.266836  688914 main.go:141] libmachine: (old-k8s-version-745712) creating domain...
	I0210 13:20:26.557749  688914 main.go:141] libmachine: (old-k8s-version-745712) waiting for IP...
	I0210 13:20:26.559028  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:20:26.559524  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | unable to find current IP address of domain old-k8s-version-745712 in network mk-old-k8s-version-745712
	I0210 13:20:26.559638  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | I0210 13:20:26.559516  688949 retry.go:31] will retry after 216.534351ms: waiting for domain to come up
	I0210 13:20:26.778025  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:20:26.778582  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | unable to find current IP address of domain old-k8s-version-745712 in network mk-old-k8s-version-745712
	I0210 13:20:26.778612  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | I0210 13:20:26.778550  688949 retry.go:31] will retry after 298.699006ms: waiting for domain to come up
	I0210 13:20:27.079343  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:20:27.079956  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | unable to find current IP address of domain old-k8s-version-745712 in network mk-old-k8s-version-745712
	I0210 13:20:27.080011  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | I0210 13:20:27.079916  688949 retry.go:31] will retry after 380.300567ms: waiting for domain to come up
	I0210 13:20:27.461638  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:20:27.462324  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | unable to find current IP address of domain old-k8s-version-745712 in network mk-old-k8s-version-745712
	I0210 13:20:27.462383  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | I0210 13:20:27.462305  688949 retry.go:31] will retry after 465.728796ms: waiting for domain to come up
	I0210 13:20:27.930316  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:20:27.930919  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | unable to find current IP address of domain old-k8s-version-745712 in network mk-old-k8s-version-745712
	I0210 13:20:27.930951  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | I0210 13:20:27.930886  688949 retry.go:31] will retry after 569.699551ms: waiting for domain to come up
	I0210 13:20:28.502860  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:20:28.503552  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | unable to find current IP address of domain old-k8s-version-745712 in network mk-old-k8s-version-745712
	I0210 13:20:28.503615  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | I0210 13:20:28.503533  688949 retry.go:31] will retry after 916.046368ms: waiting for domain to come up
	I0210 13:20:29.422004  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:20:29.422595  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | unable to find current IP address of domain old-k8s-version-745712 in network mk-old-k8s-version-745712
	I0210 13:20:29.422626  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | I0210 13:20:29.422555  688949 retry.go:31] will retry after 1.123951051s: waiting for domain to come up
	I0210 13:20:30.548627  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:20:30.549267  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | unable to find current IP address of domain old-k8s-version-745712 in network mk-old-k8s-version-745712
	I0210 13:20:30.549302  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | I0210 13:20:30.549197  688949 retry.go:31] will retry after 1.286694853s: waiting for domain to come up
	I0210 13:20:31.837642  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:20:31.838196  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | unable to find current IP address of domain old-k8s-version-745712 in network mk-old-k8s-version-745712
	I0210 13:20:31.838223  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | I0210 13:20:31.838142  688949 retry.go:31] will retry after 1.434960419s: waiting for domain to come up
	I0210 13:20:33.274299  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:20:33.274768  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | unable to find current IP address of domain old-k8s-version-745712 in network mk-old-k8s-version-745712
	I0210 13:20:33.274799  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | I0210 13:20:33.274738  688949 retry.go:31] will retry after 1.946308388s: waiting for domain to come up
	I0210 13:20:35.222327  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:20:35.222850  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | unable to find current IP address of domain old-k8s-version-745712 in network mk-old-k8s-version-745712
	I0210 13:20:35.222880  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | I0210 13:20:35.222827  688949 retry.go:31] will retry after 2.749249596s: waiting for domain to come up
	I0210 13:20:37.974522  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:20:37.975009  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | unable to find current IP address of domain old-k8s-version-745712 in network mk-old-k8s-version-745712
	I0210 13:20:37.975034  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | I0210 13:20:37.974977  688949 retry.go:31] will retry after 2.227337489s: waiting for domain to come up
	I0210 13:20:40.205295  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:20:40.205826  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | unable to find current IP address of domain old-k8s-version-745712 in network mk-old-k8s-version-745712
	I0210 13:20:40.205860  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | I0210 13:20:40.205772  688949 retry.go:31] will retry after 4.174682929s: waiting for domain to come up
	I0210 13:20:44.382166  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:20:44.382814  688914 main.go:141] libmachine: (old-k8s-version-745712) found domain IP: 192.168.72.78
	I0210 13:20:44.382841  688914 main.go:141] libmachine: (old-k8s-version-745712) reserving static IP address...
	I0210 13:20:44.382883  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has current primary IP address 192.168.72.78 and MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:20:44.383246  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | found host DHCP lease matching {name: "old-k8s-version-745712", mac: "52:54:00:dd:e4:89", ip: "192.168.72.78"} in network mk-old-k8s-version-745712: {Iface:virbr2 ExpiryTime:2025-02-10 14:20:36 +0000 UTC Type:0 Mac:52:54:00:dd:e4:89 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-745712 Clientid:01:52:54:00:dd:e4:89}
	I0210 13:20:44.383266  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | skip adding static IP to network mk-old-k8s-version-745712 - found existing host DHCP lease matching {name: "old-k8s-version-745712", mac: "52:54:00:dd:e4:89", ip: "192.168.72.78"}
	I0210 13:20:44.383276  688914 main.go:141] libmachine: (old-k8s-version-745712) reserved static IP address 192.168.72.78 for domain old-k8s-version-745712
	I0210 13:20:44.383294  688914 main.go:141] libmachine: (old-k8s-version-745712) waiting for SSH...
	I0210 13:20:44.383311  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | Getting to WaitForSSH function...
	I0210 13:20:44.385780  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:20:44.386176  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:e4:89", ip: ""} in network mk-old-k8s-version-745712: {Iface:virbr2 ExpiryTime:2025-02-10 14:20:36 +0000 UTC Type:0 Mac:52:54:00:dd:e4:89 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-745712 Clientid:01:52:54:00:dd:e4:89}
	I0210 13:20:44.386216  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined IP address 192.168.72.78 and MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:20:44.386373  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | Using SSH client type: external
	I0210 13:20:44.386395  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | Using SSH private key: /home/jenkins/minikube-integration/20383-625153/.minikube/machines/old-k8s-version-745712/id_rsa (-rw-------)
	I0210 13:20:44.386414  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.78 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20383-625153/.minikube/machines/old-k8s-version-745712/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0210 13:20:44.386423  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | About to run SSH command:
	I0210 13:20:44.386435  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | exit 0
	I0210 13:20:44.512937  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | SSH cmd err, output: <nil>: 
	I0210 13:20:44.513385  688914 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetConfigRaw
	I0210 13:20:44.514210  688914 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetIP
	I0210 13:20:44.517126  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:20:44.517526  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:e4:89", ip: ""} in network mk-old-k8s-version-745712: {Iface:virbr2 ExpiryTime:2025-02-10 14:20:36 +0000 UTC Type:0 Mac:52:54:00:dd:e4:89 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-745712 Clientid:01:52:54:00:dd:e4:89}
	I0210 13:20:44.517559  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined IP address 192.168.72.78 and MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:20:44.517833  688914 profile.go:143] Saving config to /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/old-k8s-version-745712/config.json ...
	I0210 13:20:44.518042  688914 machine.go:93] provisionDockerMachine start ...
	I0210 13:20:44.518069  688914 main.go:141] libmachine: (old-k8s-version-745712) Calling .DriverName
	I0210 13:20:44.518278  688914 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHHostname
	I0210 13:20:44.520673  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:20:44.520981  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:e4:89", ip: ""} in network mk-old-k8s-version-745712: {Iface:virbr2 ExpiryTime:2025-02-10 14:20:36 +0000 UTC Type:0 Mac:52:54:00:dd:e4:89 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-745712 Clientid:01:52:54:00:dd:e4:89}
	I0210 13:20:44.521020  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined IP address 192.168.72.78 and MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:20:44.521201  688914 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHPort
	I0210 13:20:44.521364  688914 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHKeyPath
	I0210 13:20:44.521475  688914 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHKeyPath
	I0210 13:20:44.521651  688914 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHUsername
	I0210 13:20:44.521777  688914 main.go:141] libmachine: Using SSH client type: native
	I0210 13:20:44.521981  688914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.72.78 22 <nil> <nil>}
	I0210 13:20:44.521996  688914 main.go:141] libmachine: About to run SSH command:
	hostname
	I0210 13:20:44.633966  688914 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0210 13:20:44.634001  688914 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetMachineName
	I0210 13:20:44.634259  688914 buildroot.go:166] provisioning hostname "old-k8s-version-745712"
	I0210 13:20:44.634282  688914 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetMachineName
	I0210 13:20:44.634450  688914 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHHostname
	I0210 13:20:44.637509  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:20:44.637909  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:e4:89", ip: ""} in network mk-old-k8s-version-745712: {Iface:virbr2 ExpiryTime:2025-02-10 14:20:36 +0000 UTC Type:0 Mac:52:54:00:dd:e4:89 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-745712 Clientid:01:52:54:00:dd:e4:89}
	I0210 13:20:44.637946  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined IP address 192.168.72.78 and MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:20:44.638090  688914 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHPort
	I0210 13:20:44.638315  688914 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHKeyPath
	I0210 13:20:44.638528  688914 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHKeyPath
	I0210 13:20:44.638690  688914 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHUsername
	I0210 13:20:44.638880  688914 main.go:141] libmachine: Using SSH client type: native
	I0210 13:20:44.639117  688914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.72.78 22 <nil> <nil>}
	I0210 13:20:44.639136  688914 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-745712 && echo "old-k8s-version-745712" | sudo tee /etc/hostname
	I0210 13:20:44.763600  688914 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-745712
	
	I0210 13:20:44.763631  688914 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHHostname
	I0210 13:20:44.766413  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:20:44.766739  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:e4:89", ip: ""} in network mk-old-k8s-version-745712: {Iface:virbr2 ExpiryTime:2025-02-10 14:20:36 +0000 UTC Type:0 Mac:52:54:00:dd:e4:89 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-745712 Clientid:01:52:54:00:dd:e4:89}
	I0210 13:20:44.766766  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined IP address 192.168.72.78 and MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:20:44.766907  688914 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHPort
	I0210 13:20:44.767101  688914 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHKeyPath
	I0210 13:20:44.767310  688914 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHKeyPath
	I0210 13:20:44.767475  688914 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHUsername
	I0210 13:20:44.767676  688914 main.go:141] libmachine: Using SSH client type: native
	I0210 13:20:44.767892  688914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.72.78 22 <nil> <nil>}
	I0210 13:20:44.767910  688914 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-745712' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-745712/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-745712' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0210 13:20:44.890143  688914 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0210 13:20:44.890180  688914 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20383-625153/.minikube CaCertPath:/home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20383-625153/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20383-625153/.minikube}
	I0210 13:20:44.890209  688914 buildroot.go:174] setting up certificates
	I0210 13:20:44.890222  688914 provision.go:84] configureAuth start
	I0210 13:20:44.890231  688914 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetMachineName
	I0210 13:20:44.890512  688914 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetIP
	I0210 13:20:44.893278  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:20:44.893643  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:e4:89", ip: ""} in network mk-old-k8s-version-745712: {Iface:virbr2 ExpiryTime:2025-02-10 14:20:36 +0000 UTC Type:0 Mac:52:54:00:dd:e4:89 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-745712 Clientid:01:52:54:00:dd:e4:89}
	I0210 13:20:44.893674  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined IP address 192.168.72.78 and MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:20:44.893789  688914 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHHostname
	I0210 13:20:44.895876  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:20:44.896164  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:e4:89", ip: ""} in network mk-old-k8s-version-745712: {Iface:virbr2 ExpiryTime:2025-02-10 14:20:36 +0000 UTC Type:0 Mac:52:54:00:dd:e4:89 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-745712 Clientid:01:52:54:00:dd:e4:89}
	I0210 13:20:44.896222  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined IP address 192.168.72.78 and MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:20:44.896341  688914 provision.go:143] copyHostCerts
	I0210 13:20:44.896466  688914 exec_runner.go:144] found /home/jenkins/minikube-integration/20383-625153/.minikube/ca.pem, removing ...
	I0210 13:20:44.896484  688914 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20383-625153/.minikube/ca.pem
	I0210 13:20:44.896540  688914 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20383-625153/.minikube/ca.pem (1082 bytes)
	I0210 13:20:44.896637  688914 exec_runner.go:144] found /home/jenkins/minikube-integration/20383-625153/.minikube/cert.pem, removing ...
	I0210 13:20:44.896645  688914 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20383-625153/.minikube/cert.pem
	I0210 13:20:44.896666  688914 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20383-625153/.minikube/cert.pem (1123 bytes)
	I0210 13:20:44.896728  688914 exec_runner.go:144] found /home/jenkins/minikube-integration/20383-625153/.minikube/key.pem, removing ...
	I0210 13:20:44.896735  688914 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20383-625153/.minikube/key.pem
	I0210 13:20:44.896751  688914 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20383-625153/.minikube/key.pem (1675 bytes)
	I0210 13:20:44.896809  688914 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20383-625153/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-745712 san=[127.0.0.1 192.168.72.78 localhost minikube old-k8s-version-745712]
	I0210 13:20:45.000238  688914 provision.go:177] copyRemoteCerts
	I0210 13:20:45.000306  688914 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0210 13:20:45.000335  688914 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHHostname
	I0210 13:20:45.002810  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:20:45.003152  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:e4:89", ip: ""} in network mk-old-k8s-version-745712: {Iface:virbr2 ExpiryTime:2025-02-10 14:20:36 +0000 UTC Type:0 Mac:52:54:00:dd:e4:89 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-745712 Clientid:01:52:54:00:dd:e4:89}
	I0210 13:20:45.003181  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined IP address 192.168.72.78 and MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:20:45.003369  688914 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHPort
	I0210 13:20:45.003548  688914 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHKeyPath
	I0210 13:20:45.003703  688914 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHUsername
	I0210 13:20:45.003833  688914 sshutil.go:53] new ssh client: &{IP:192.168.72.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/old-k8s-version-745712/id_rsa Username:docker}
	I0210 13:20:45.087041  688914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0210 13:20:45.109755  688914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0210 13:20:45.135710  688914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0210 13:20:45.157768  688914 provision.go:87] duration metric: took 267.530684ms to configureAuth
	I0210 13:20:45.157796  688914 buildroot.go:189] setting minikube options for container-runtime
	I0210 13:20:45.157996  688914 config.go:182] Loaded profile config "old-k8s-version-745712": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0210 13:20:45.158098  688914 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHHostname
	I0210 13:20:45.160758  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:20:45.161191  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:e4:89", ip: ""} in network mk-old-k8s-version-745712: {Iface:virbr2 ExpiryTime:2025-02-10 14:20:36 +0000 UTC Type:0 Mac:52:54:00:dd:e4:89 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-745712 Clientid:01:52:54:00:dd:e4:89}
	I0210 13:20:45.161229  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined IP address 192.168.72.78 and MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:20:45.161431  688914 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHPort
	I0210 13:20:45.161602  688914 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHKeyPath
	I0210 13:20:45.161786  688914 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHKeyPath
	I0210 13:20:45.161945  688914 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHUsername
	I0210 13:20:45.162114  688914 main.go:141] libmachine: Using SSH client type: native
	I0210 13:20:45.162289  688914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.72.78 22 <nil> <nil>}
	I0210 13:20:45.162306  688914 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0210 13:20:45.399372  688914 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0210 13:20:45.399410  688914 machine.go:96] duration metric: took 881.351133ms to provisionDockerMachine
	I0210 13:20:45.399425  688914 start.go:293] postStartSetup for "old-k8s-version-745712" (driver="kvm2")
	I0210 13:20:45.399439  688914 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0210 13:20:45.399464  688914 main.go:141] libmachine: (old-k8s-version-745712) Calling .DriverName
	I0210 13:20:45.399861  688914 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0210 13:20:45.399916  688914 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHHostname
	I0210 13:20:45.402680  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:20:45.403007  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:e4:89", ip: ""} in network mk-old-k8s-version-745712: {Iface:virbr2 ExpiryTime:2025-02-10 14:20:36 +0000 UTC Type:0 Mac:52:54:00:dd:e4:89 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-745712 Clientid:01:52:54:00:dd:e4:89}
	I0210 13:20:45.403035  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined IP address 192.168.72.78 and MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:20:45.403206  688914 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHPort
	I0210 13:20:45.403400  688914 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHKeyPath
	I0210 13:20:45.403585  688914 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHUsername
	I0210 13:20:45.403739  688914 sshutil.go:53] new ssh client: &{IP:192.168.72.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/old-k8s-version-745712/id_rsa Username:docker}
	I0210 13:20:45.491345  688914 ssh_runner.go:195] Run: cat /etc/os-release
	I0210 13:20:45.495867  688914 info.go:137] Remote host: Buildroot 2023.02.9
	I0210 13:20:45.495893  688914 filesync.go:126] Scanning /home/jenkins/minikube-integration/20383-625153/.minikube/addons for local assets ...
	I0210 13:20:45.495947  688914 filesync.go:126] Scanning /home/jenkins/minikube-integration/20383-625153/.minikube/files for local assets ...
	I0210 13:20:45.496015  688914 filesync.go:149] local asset: /home/jenkins/minikube-integration/20383-625153/.minikube/files/etc/ssl/certs/6323522.pem -> 6323522.pem in /etc/ssl/certs
	I0210 13:20:45.496099  688914 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0210 13:20:45.505290  688914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/files/etc/ssl/certs/6323522.pem --> /etc/ssl/certs/6323522.pem (1708 bytes)
	I0210 13:20:45.527789  688914 start.go:296] duration metric: took 128.34799ms for postStartSetup
	I0210 13:20:45.527837  688914 fix.go:56] duration metric: took 20.283453339s for fixHost
	I0210 13:20:45.527876  688914 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHHostname
	I0210 13:20:45.530697  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:20:45.531073  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:e4:89", ip: ""} in network mk-old-k8s-version-745712: {Iface:virbr2 ExpiryTime:2025-02-10 14:20:36 +0000 UTC Type:0 Mac:52:54:00:dd:e4:89 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-745712 Clientid:01:52:54:00:dd:e4:89}
	I0210 13:20:45.531103  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined IP address 192.168.72.78 and MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:20:45.531279  688914 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHPort
	I0210 13:20:45.531491  688914 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHKeyPath
	I0210 13:20:45.531654  688914 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHKeyPath
	I0210 13:20:45.531803  688914 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHUsername
	I0210 13:20:45.531976  688914 main.go:141] libmachine: Using SSH client type: native
	I0210 13:20:45.532168  688914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.72.78 22 <nil> <nil>}
	I0210 13:20:45.532194  688914 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0210 13:20:45.641642  688914 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739193645.615734259
	
	I0210 13:20:45.641670  688914 fix.go:216] guest clock: 1739193645.615734259
	I0210 13:20:45.641680  688914 fix.go:229] Guest: 2025-02-10 13:20:45.615734259 +0000 UTC Remote: 2025-02-10 13:20:45.527843031 +0000 UTC m=+20.436787240 (delta=87.891228ms)
	I0210 13:20:45.641727  688914 fix.go:200] guest clock delta is within tolerance: 87.891228ms
	I0210 13:20:45.641736  688914 start.go:83] releasing machines lock for "old-k8s-version-745712", held for 20.397370752s
	I0210 13:20:45.641763  688914 main.go:141] libmachine: (old-k8s-version-745712) Calling .DriverName
	I0210 13:20:45.642073  688914 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetIP
	I0210 13:20:45.644918  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:20:45.645321  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:e4:89", ip: ""} in network mk-old-k8s-version-745712: {Iface:virbr2 ExpiryTime:2025-02-10 14:20:36 +0000 UTC Type:0 Mac:52:54:00:dd:e4:89 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-745712 Clientid:01:52:54:00:dd:e4:89}
	I0210 13:20:45.645363  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined IP address 192.168.72.78 and MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:20:45.645521  688914 main.go:141] libmachine: (old-k8s-version-745712) Calling .DriverName
	I0210 13:20:45.646024  688914 main.go:141] libmachine: (old-k8s-version-745712) Calling .DriverName
	I0210 13:20:45.646201  688914 main.go:141] libmachine: (old-k8s-version-745712) Calling .DriverName
	I0210 13:20:45.646287  688914 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0210 13:20:45.646334  688914 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHHostname
	I0210 13:20:45.646461  688914 ssh_runner.go:195] Run: cat /version.json
	I0210 13:20:45.646493  688914 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHHostname
	I0210 13:20:45.649500  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:20:45.649671  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:20:45.649941  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:e4:89", ip: ""} in network mk-old-k8s-version-745712: {Iface:virbr2 ExpiryTime:2025-02-10 14:20:36 +0000 UTC Type:0 Mac:52:54:00:dd:e4:89 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-745712 Clientid:01:52:54:00:dd:e4:89}
	I0210 13:20:45.649973  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined IP address 192.168.72.78 and MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:20:45.650112  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:e4:89", ip: ""} in network mk-old-k8s-version-745712: {Iface:virbr2 ExpiryTime:2025-02-10 14:20:36 +0000 UTC Type:0 Mac:52:54:00:dd:e4:89 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-745712 Clientid:01:52:54:00:dd:e4:89}
	I0210 13:20:45.650124  688914 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHPort
	I0210 13:20:45.650162  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined IP address 192.168.72.78 and MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:20:45.650318  688914 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHKeyPath
	I0210 13:20:45.650490  688914 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHUsername
	I0210 13:20:45.650497  688914 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHPort
	I0210 13:20:45.650649  688914 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHKeyPath
	I0210 13:20:45.650700  688914 sshutil.go:53] new ssh client: &{IP:192.168.72.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/old-k8s-version-745712/id_rsa Username:docker}
	I0210 13:20:45.650805  688914 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetSSHUsername
	I0210 13:20:45.650940  688914 sshutil.go:53] new ssh client: &{IP:192.168.72.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/old-k8s-version-745712/id_rsa Username:docker}
	I0210 13:20:45.752980  688914 ssh_runner.go:195] Run: systemctl --version
	I0210 13:20:45.758677  688914 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0210 13:20:45.900601  688914 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0210 13:20:45.906396  688914 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0210 13:20:45.906461  688914 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0210 13:20:45.922128  688914 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0210 13:20:45.922165  688914 start.go:495] detecting cgroup driver to use...
	I0210 13:20:45.922254  688914 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0210 13:20:45.938002  688914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0210 13:20:45.951674  688914 docker.go:217] disabling cri-docker service (if available) ...
	I0210 13:20:45.951741  688914 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0210 13:20:45.964243  688914 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0210 13:20:45.977529  688914 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0210 13:20:46.097263  688914 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0210 13:20:46.235763  688914 docker.go:233] disabling docker service ...
	I0210 13:20:46.235836  688914 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0210 13:20:46.249867  688914 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0210 13:20:46.262339  688914 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0210 13:20:46.398623  688914 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0210 13:20:46.509157  688914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0210 13:20:46.522244  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0210 13:20:46.540200  688914 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0210 13:20:46.540340  688914 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:20:46.549751  688914 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0210 13:20:46.549813  688914 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:20:46.560204  688914 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:20:46.569906  688914 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:20:46.579514  688914 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0210 13:20:46.589233  688914 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0210 13:20:46.597711  688914 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0210 13:20:46.597766  688914 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0210 13:20:46.609365  688914 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0210 13:20:46.619120  688914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 13:20:46.726125  688914 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0210 13:20:46.818753  688914 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0210 13:20:46.818821  688914 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0210 13:20:46.823773  688914 start.go:563] Will wait 60s for crictl version
	I0210 13:20:46.823842  688914 ssh_runner.go:195] Run: which crictl
	I0210 13:20:46.827326  688914 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0210 13:20:46.868564  688914 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0210 13:20:46.868654  688914 ssh_runner.go:195] Run: crio --version
	I0210 13:20:46.897715  688914 ssh_runner.go:195] Run: crio --version
	I0210 13:20:46.925489  688914 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0210 13:20:46.926659  688914 main.go:141] libmachine: (old-k8s-version-745712) Calling .GetIP
	I0210 13:20:46.929783  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:20:46.930220  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:e4:89", ip: ""} in network mk-old-k8s-version-745712: {Iface:virbr2 ExpiryTime:2025-02-10 14:20:36 +0000 UTC Type:0 Mac:52:54:00:dd:e4:89 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-745712 Clientid:01:52:54:00:dd:e4:89}
	I0210 13:20:46.930252  688914 main.go:141] libmachine: (old-k8s-version-745712) DBG | domain old-k8s-version-745712 has defined IP address 192.168.72.78 and MAC address 52:54:00:dd:e4:89 in network mk-old-k8s-version-745712
	I0210 13:20:46.930500  688914 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0210 13:20:46.934607  688914 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 13:20:46.948337  688914 kubeadm.go:883] updating cluster {Name:old-k8s-version-745712 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-745712 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.78 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0210 13:20:46.948495  688914 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0210 13:20:46.949000  688914 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 13:20:47.001702  688914 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0210 13:20:47.001770  688914 ssh_runner.go:195] Run: which lz4
	I0210 13:20:47.005688  688914 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0210 13:20:47.009438  688914 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0210 13:20:47.009468  688914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0210 13:20:48.442154  688914 crio.go:462] duration metric: took 1.436496966s to copy over tarball
	I0210 13:20:48.442251  688914 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0210 13:20:51.365235  688914 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.922944664s)
	I0210 13:20:51.365276  688914 crio.go:469] duration metric: took 2.923086589s to extract the tarball
	I0210 13:20:51.365286  688914 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0210 13:20:51.408475  688914 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 13:20:51.441869  688914 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0210 13:20:51.441907  688914 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0210 13:20:51.441992  688914 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 13:20:51.442047  688914 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0210 13:20:51.442060  688914 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0210 13:20:51.442025  688914 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 13:20:51.442101  688914 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0210 13:20:51.442054  688914 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0210 13:20:51.442020  688914 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0210 13:20:51.441992  688914 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0210 13:20:51.443486  688914 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 13:20:51.443534  688914 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0210 13:20:51.443633  688914 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0210 13:20:51.443635  688914 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0210 13:20:51.443541  688914 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0210 13:20:51.443641  688914 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0210 13:20:51.443808  688914 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0210 13:20:51.443663  688914 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 13:20:51.587725  688914 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0210 13:20:51.594826  688914 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0210 13:20:51.596278  688914 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 13:20:51.609520  688914 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0210 13:20:51.619010  688914 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0210 13:20:51.622431  688914 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0210 13:20:51.630824  688914 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0210 13:20:51.712067  688914 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0210 13:20:51.712122  688914 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0210 13:20:51.712170  688914 ssh_runner.go:195] Run: which crictl
	I0210 13:20:51.746696  688914 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0210 13:20:51.746731  688914 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0210 13:20:51.746745  688914 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 13:20:51.746763  688914 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0210 13:20:51.746807  688914 ssh_runner.go:195] Run: which crictl
	I0210 13:20:51.746817  688914 ssh_runner.go:195] Run: which crictl
	I0210 13:20:51.785855  688914 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0210 13:20:51.785901  688914 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0210 13:20:51.785859  688914 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0210 13:20:51.785901  688914 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0210 13:20:51.785955  688914 ssh_runner.go:195] Run: which crictl
	I0210 13:20:51.785973  688914 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0210 13:20:51.785972  688914 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0210 13:20:51.786016  688914 ssh_runner.go:195] Run: which crictl
	I0210 13:20:51.786030  688914 ssh_runner.go:195] Run: which crictl
	I0210 13:20:51.797898  688914 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0210 13:20:51.797954  688914 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0210 13:20:51.797972  688914 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 13:20:51.798010  688914 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0210 13:20:51.798037  688914 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0210 13:20:51.798062  688914 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0210 13:20:51.798071  688914 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0210 13:20:51.798067  688914 ssh_runner.go:195] Run: which crictl
	I0210 13:20:51.798137  688914 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0210 13:20:51.926685  688914 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0210 13:20:51.926755  688914 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0210 13:20:51.926870  688914 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 13:20:51.927455  688914 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0210 13:20:51.927589  688914 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0210 13:20:51.927672  688914 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0210 13:20:51.927704  688914 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0210 13:20:52.077010  688914 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0210 13:20:52.077067  688914 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 13:20:52.077199  688914 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0210 13:20:52.077233  688914 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0210 13:20:52.077264  688914 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0210 13:20:52.077363  688914 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0210 13:20:52.085561  688914 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0210 13:20:52.206660  688914 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20383-625153/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0210 13:20:52.206786  688914 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20383-625153/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0210 13:20:52.221576  688914 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20383-625153/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0210 13:20:52.221592  688914 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20383-625153/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0210 13:20:52.221592  688914 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0210 13:20:52.221691  688914 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20383-625153/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0210 13:20:52.221660  688914 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20383-625153/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0210 13:20:52.257377  688914 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20383-625153/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0210 13:20:52.442995  688914 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 13:20:52.581943  688914 cache_images.go:92] duration metric: took 1.140012679s to LoadCachedImages
	W0210 13:20:52.582095  688914 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20383-625153/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20383-625153/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0210 13:20:52.582118  688914 kubeadm.go:934] updating node { 192.168.72.78 8443 v1.20.0 crio true true} ...
	I0210 13:20:52.582264  688914 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-745712 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.78
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-745712 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0210 13:20:52.582364  688914 ssh_runner.go:195] Run: crio config
	I0210 13:20:52.636741  688914 cni.go:84] Creating CNI manager for ""
	I0210 13:20:52.636769  688914 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 13:20:52.636779  688914 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0210 13:20:52.636804  688914 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.78 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-745712 NodeName:old-k8s-version-745712 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.78"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.78 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0210 13:20:52.636956  688914 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.78
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-745712"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.78
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.78"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0210 13:20:52.637032  688914 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0210 13:20:52.647534  688914 binaries.go:44] Found k8s binaries, skipping transfer
	I0210 13:20:52.647621  688914 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0210 13:20:52.657123  688914 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0210 13:20:52.672870  688914 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0210 13:20:52.690044  688914 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0210 13:20:52.707792  688914 ssh_runner.go:195] Run: grep 192.168.72.78	control-plane.minikube.internal$ /etc/hosts
	I0210 13:20:52.711620  688914 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.78	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 13:20:52.723685  688914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 13:20:52.833877  688914 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 13:20:52.851076  688914 certs.go:68] Setting up /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/old-k8s-version-745712 for IP: 192.168.72.78
	I0210 13:20:52.851104  688914 certs.go:194] generating shared ca certs ...
	I0210 13:20:52.851126  688914 certs.go:226] acquiring lock for ca certs: {Name:mkf2f72f82a6bd7e3c16bb224cd26b80c3c89e0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:20:52.851337  688914 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20383-625153/.minikube/ca.key
	I0210 13:20:52.851414  688914 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20383-625153/.minikube/proxy-client-ca.key
	I0210 13:20:52.851429  688914 certs.go:256] generating profile certs ...
	I0210 13:20:52.851561  688914 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/old-k8s-version-745712/client.key
	I0210 13:20:52.851630  688914 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/old-k8s-version-745712/apiserver.key.f20ca5cb
	I0210 13:20:52.851681  688914 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/old-k8s-version-745712/proxy-client.key
	I0210 13:20:52.851846  688914 certs.go:484] found cert: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/632352.pem (1338 bytes)
	W0210 13:20:52.851892  688914 certs.go:480] ignoring /home/jenkins/minikube-integration/20383-625153/.minikube/certs/632352_empty.pem, impossibly tiny 0 bytes
	I0210 13:20:52.851908  688914 certs.go:484] found cert: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca-key.pem (1675 bytes)
	I0210 13:20:52.851947  688914 certs.go:484] found cert: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca.pem (1082 bytes)
	I0210 13:20:52.851981  688914 certs.go:484] found cert: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/cert.pem (1123 bytes)
	I0210 13:20:52.852015  688914 certs.go:484] found cert: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/key.pem (1675 bytes)
	I0210 13:20:52.852075  688914 certs.go:484] found cert: /home/jenkins/minikube-integration/20383-625153/.minikube/files/etc/ssl/certs/6323522.pem (1708 bytes)
	I0210 13:20:52.852954  688914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0210 13:20:52.880850  688914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0210 13:20:52.911613  688914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0210 13:20:52.938746  688914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0210 13:20:52.966791  688914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/old-k8s-version-745712/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0210 13:20:53.001613  688914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/old-k8s-version-745712/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0210 13:20:53.033044  688914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/old-k8s-version-745712/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0210 13:20:53.061275  688914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/old-k8s-version-745712/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0210 13:20:53.100071  688914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/certs/632352.pem --> /usr/share/ca-certificates/632352.pem (1338 bytes)
	I0210 13:20:53.127492  688914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/files/etc/ssl/certs/6323522.pem --> /usr/share/ca-certificates/6323522.pem (1708 bytes)
	I0210 13:20:53.151325  688914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0210 13:20:53.177466  688914 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0210 13:20:53.194956  688914 ssh_runner.go:195] Run: openssl version
	I0210 13:20:53.201769  688914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/632352.pem && ln -fs /usr/share/ca-certificates/632352.pem /etc/ssl/certs/632352.pem"
	I0210 13:20:53.213049  688914 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/632352.pem
	I0210 13:20:53.217950  688914 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 10 12:13 /usr/share/ca-certificates/632352.pem
	I0210 13:20:53.218030  688914 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/632352.pem
	I0210 13:20:53.224031  688914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/632352.pem /etc/ssl/certs/51391683.0"
	I0210 13:20:53.234469  688914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6323522.pem && ln -fs /usr/share/ca-certificates/6323522.pem /etc/ssl/certs/6323522.pem"
	I0210 13:20:53.245470  688914 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6323522.pem
	I0210 13:20:53.249573  688914 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 10 12:13 /usr/share/ca-certificates/6323522.pem
	I0210 13:20:53.249637  688914 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6323522.pem
	I0210 13:20:53.255528  688914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6323522.pem /etc/ssl/certs/3ec20f2e.0"
	I0210 13:20:53.267021  688914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0210 13:20:53.279186  688914 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0210 13:20:53.283786  688914 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 10 12:05 /usr/share/ca-certificates/minikubeCA.pem
	I0210 13:20:53.283845  688914 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0210 13:20:53.290039  688914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0210 13:20:53.301282  688914 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0210 13:20:53.307006  688914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0210 13:20:53.313505  688914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0210 13:20:53.319493  688914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0210 13:20:53.325379  688914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0210 13:20:53.331529  688914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0210 13:20:53.337246  688914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0210 13:20:53.342441  688914 kubeadm.go:392] StartCluster: {Name:old-k8s-version-745712 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-745712 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.78 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 13:20:53.342533  688914 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0210 13:20:53.342587  688914 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0210 13:20:53.387101  688914 cri.go:89] found id: ""
	I0210 13:20:53.387175  688914 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0210 13:20:53.398632  688914 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0210 13:20:53.398664  688914 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0210 13:20:53.398719  688914 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0210 13:20:53.409589  688914 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0210 13:20:53.410740  688914 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-745712" does not appear in /home/jenkins/minikube-integration/20383-625153/kubeconfig
	I0210 13:20:53.411564  688914 kubeconfig.go:62] /home/jenkins/minikube-integration/20383-625153/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-745712" cluster setting kubeconfig missing "old-k8s-version-745712" context setting]
	I0210 13:20:53.412748  688914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20383-625153/kubeconfig: {Name:mke7ef1ff4ff1259856291979fdd0337df3a08b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:20:53.446271  688914 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0210 13:20:53.456537  688914 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.78
	I0210 13:20:53.456593  688914 kubeadm.go:1160] stopping kube-system containers ...
	I0210 13:20:53.456610  688914 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0210 13:20:53.456683  688914 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0210 13:20:53.493416  688914 cri.go:89] found id: ""
	I0210 13:20:53.493491  688914 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0210 13:20:53.511960  688914 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 13:20:53.522192  688914 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 13:20:53.522219  688914 kubeadm.go:157] found existing configuration files:
	
	I0210 13:20:53.522308  688914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0210 13:20:53.532030  688914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 13:20:53.532089  688914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 13:20:53.541334  688914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0210 13:20:53.550101  688914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 13:20:53.550164  688914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 13:20:53.559683  688914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0210 13:20:53.568394  688914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 13:20:53.568472  688914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 13:20:53.578400  688914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0210 13:20:53.588692  688914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 13:20:53.588760  688914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0210 13:20:53.599233  688914 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0210 13:20:53.609466  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 13:20:53.782430  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 13:20:54.450152  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0210 13:20:54.696071  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 13:20:54.790244  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0210 13:20:54.878822  688914 api_server.go:52] waiting for apiserver process to appear ...
	I0210 13:20:54.878927  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:20:55.379429  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:20:55.880019  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:20:56.379156  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:20:56.879206  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:20:57.379932  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:20:57.879861  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:20:58.378991  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:20:58.879093  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:20:59.379321  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:20:59.879305  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:00.379359  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:00.879247  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:01.379919  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:01.879669  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:02.379111  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:02.879562  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:03.379509  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:03.879019  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:04.379416  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:04.879206  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:05.379678  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:05.879663  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:06.379698  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:06.879288  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:07.379456  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:07.879442  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:08.379333  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:08.879163  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:09.379563  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:09.879560  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:10.379890  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:10.879107  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:11.379017  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:11.878996  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:12.379646  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:12.879897  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:13.379371  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:13.879330  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:14.379417  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:14.879150  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:15.379595  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:15.879464  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:16.379077  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:16.879116  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:17.379022  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:17.879484  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:18.379234  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:18.878966  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:19.379120  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:19.879214  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:20.379476  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:20.879404  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:21.379929  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:21.879955  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:22.379397  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:22.879444  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:23.379926  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:23.880026  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:24.379813  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:24.879192  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:25.379169  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:25.879831  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:26.379252  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:26.879965  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:27.378992  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:27.879975  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:28.378989  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:28.879511  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:29.379380  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:29.879981  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:30.378964  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:30.879994  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:31.379415  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:31.879138  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:32.379766  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:32.879995  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:33.379286  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:33.879667  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:34.379085  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:34.879140  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:35.379480  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:35.879063  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:36.379191  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:36.879848  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:37.378974  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:37.879844  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:38.379701  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:38.879331  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:39.380034  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:39.879253  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:40.379223  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:40.879670  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:41.379903  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:41.879622  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:42.379926  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:42.879118  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:43.379923  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:43.879608  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:44.379842  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:44.879129  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:45.379816  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:45.879040  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:46.379935  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:46.879184  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:47.378962  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:47.879723  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:48.379410  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:48.879968  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:49.379447  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:49.879057  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:50.379706  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:50.879896  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:51.379049  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:51.879952  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:52.379099  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:52.879016  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:53.379054  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:53.879430  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:54.379265  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:54.879955  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:21:54.880058  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:21:54.920245  688914 cri.go:89] found id: ""
	I0210 13:21:54.920274  688914 logs.go:282] 0 containers: []
	W0210 13:21:54.920282  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:21:54.920288  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:21:54.920341  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:21:54.954576  688914 cri.go:89] found id: ""
	I0210 13:21:54.954609  688914 logs.go:282] 0 containers: []
	W0210 13:21:54.954617  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:21:54.954623  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:21:54.954690  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:21:54.989501  688914 cri.go:89] found id: ""
	I0210 13:21:54.989537  688914 logs.go:282] 0 containers: []
	W0210 13:21:54.989548  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:21:54.989555  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:21:54.989634  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:21:55.030137  688914 cri.go:89] found id: ""
	I0210 13:21:55.030166  688914 logs.go:282] 0 containers: []
	W0210 13:21:55.030174  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:21:55.030180  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:21:55.030241  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:21:55.060806  688914 cri.go:89] found id: ""
	I0210 13:21:55.060831  688914 logs.go:282] 0 containers: []
	W0210 13:21:55.060839  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:21:55.060845  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:21:55.060910  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:21:55.093344  688914 cri.go:89] found id: ""
	I0210 13:21:55.093381  688914 logs.go:282] 0 containers: []
	W0210 13:21:55.093393  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:21:55.093402  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:21:55.093462  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:21:55.124530  688914 cri.go:89] found id: ""
	I0210 13:21:55.124572  688914 logs.go:282] 0 containers: []
	W0210 13:21:55.124581  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:21:55.124587  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:21:55.124650  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:21:55.158653  688914 cri.go:89] found id: ""
	I0210 13:21:55.158684  688914 logs.go:282] 0 containers: []
	W0210 13:21:55.158693  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:21:55.158703  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:21:55.158714  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:21:55.227611  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:21:55.227658  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:21:55.267829  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:21:55.267867  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:21:55.320275  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:21:55.320321  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:21:55.333831  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:21:55.333863  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:21:55.455855  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:21:57.957236  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:21:57.971116  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:21:57.971201  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:21:58.008341  688914 cri.go:89] found id: ""
	I0210 13:21:58.008369  688914 logs.go:282] 0 containers: []
	W0210 13:21:58.008377  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:21:58.008384  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:21:58.008442  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:21:58.043328  688914 cri.go:89] found id: ""
	I0210 13:21:58.043356  688914 logs.go:282] 0 containers: []
	W0210 13:21:58.043366  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:21:58.043374  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:21:58.043444  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:21:58.080606  688914 cri.go:89] found id: ""
	I0210 13:21:58.080638  688914 logs.go:282] 0 containers: []
	W0210 13:21:58.080649  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:21:58.080657  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:21:58.080719  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:21:58.113345  688914 cri.go:89] found id: ""
	I0210 13:21:58.113380  688914 logs.go:282] 0 containers: []
	W0210 13:21:58.113389  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:21:58.113396  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:21:58.113458  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:21:58.149483  688914 cri.go:89] found id: ""
	I0210 13:21:58.149510  688914 logs.go:282] 0 containers: []
	W0210 13:21:58.149522  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:21:58.149530  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:21:58.149585  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:21:58.191559  688914 cri.go:89] found id: ""
	I0210 13:21:58.191597  688914 logs.go:282] 0 containers: []
	W0210 13:21:58.191609  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:21:58.191618  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:21:58.191687  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:21:58.228995  688914 cri.go:89] found id: ""
	I0210 13:21:58.229034  688914 logs.go:282] 0 containers: []
	W0210 13:21:58.229047  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:21:58.229057  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:21:58.229159  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:21:58.274345  688914 cri.go:89] found id: ""
	I0210 13:21:58.274392  688914 logs.go:282] 0 containers: []
	W0210 13:21:58.274405  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:21:58.274421  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:21:58.274439  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:21:58.330393  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:21:58.330435  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:21:58.342910  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:21:58.342940  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:21:58.408837  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:21:58.408870  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:21:58.408889  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:21:58.486135  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:21:58.486177  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:22:01.024853  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:22:01.037753  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:22:01.037822  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:22:01.070325  688914 cri.go:89] found id: ""
	I0210 13:22:01.070359  688914 logs.go:282] 0 containers: []
	W0210 13:22:01.070371  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:22:01.070382  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:22:01.070453  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:22:01.103677  688914 cri.go:89] found id: ""
	I0210 13:22:01.103708  688914 logs.go:282] 0 containers: []
	W0210 13:22:01.103720  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:22:01.103727  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:22:01.103788  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:22:01.135869  688914 cri.go:89] found id: ""
	I0210 13:22:01.135897  688914 logs.go:282] 0 containers: []
	W0210 13:22:01.135907  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:22:01.135915  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:22:01.135984  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:22:01.171077  688914 cri.go:89] found id: ""
	I0210 13:22:01.171107  688914 logs.go:282] 0 containers: []
	W0210 13:22:01.171117  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:22:01.171125  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:22:01.171191  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:22:01.208822  688914 cri.go:89] found id: ""
	I0210 13:22:01.208855  688914 logs.go:282] 0 containers: []
	W0210 13:22:01.208866  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:22:01.208875  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:22:01.208943  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:22:01.246317  688914 cri.go:89] found id: ""
	I0210 13:22:01.246346  688914 logs.go:282] 0 containers: []
	W0210 13:22:01.246357  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:22:01.246367  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:22:01.246441  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:22:01.280386  688914 cri.go:89] found id: ""
	I0210 13:22:01.280421  688914 logs.go:282] 0 containers: []
	W0210 13:22:01.280433  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:22:01.280441  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:22:01.280522  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:22:01.319703  688914 cri.go:89] found id: ""
	I0210 13:22:01.319735  688914 logs.go:282] 0 containers: []
	W0210 13:22:01.319744  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:22:01.319754  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:22:01.319767  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:22:01.368738  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:22:01.368776  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:22:01.382927  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:22:01.382956  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:22:01.455256  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:22:01.455287  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:22:01.455304  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:22:01.531968  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:22:01.532013  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:22:04.075318  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:22:04.090002  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:22:04.090068  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:22:04.132811  688914 cri.go:89] found id: ""
	I0210 13:22:04.132842  688914 logs.go:282] 0 containers: []
	W0210 13:22:04.132850  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:22:04.132856  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:22:04.132921  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:22:04.166041  688914 cri.go:89] found id: ""
	I0210 13:22:04.166069  688914 logs.go:282] 0 containers: []
	W0210 13:22:04.166083  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:22:04.166088  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:22:04.166156  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:22:04.199985  688914 cri.go:89] found id: ""
	I0210 13:22:04.200015  688914 logs.go:282] 0 containers: []
	W0210 13:22:04.200028  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:22:04.200035  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:22:04.200089  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:22:04.232006  688914 cri.go:89] found id: ""
	I0210 13:22:04.232043  688914 logs.go:282] 0 containers: []
	W0210 13:22:04.232054  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:22:04.232063  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:22:04.232124  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:22:04.270605  688914 cri.go:89] found id: ""
	I0210 13:22:04.270637  688914 logs.go:282] 0 containers: []
	W0210 13:22:04.270646  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:22:04.270653  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:22:04.270708  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:22:04.303041  688914 cri.go:89] found id: ""
	I0210 13:22:04.303076  688914 logs.go:282] 0 containers: []
	W0210 13:22:04.303085  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:22:04.303092  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:22:04.303171  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:22:04.334764  688914 cri.go:89] found id: ""
	I0210 13:22:04.334795  688914 logs.go:282] 0 containers: []
	W0210 13:22:04.334805  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:22:04.334813  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:22:04.334880  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:22:04.367145  688914 cri.go:89] found id: ""
	I0210 13:22:04.367174  688914 logs.go:282] 0 containers: []
	W0210 13:22:04.367182  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:22:04.367193  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:22:04.367206  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:22:04.421421  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:22:04.421465  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:22:04.435615  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:22:04.435653  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:22:04.505827  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:22:04.505857  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:22:04.505869  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:22:04.585234  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:22:04.585286  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:22:07.131958  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:22:07.144777  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:22:07.144840  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:22:07.175981  688914 cri.go:89] found id: ""
	I0210 13:22:07.176008  688914 logs.go:282] 0 containers: []
	W0210 13:22:07.176016  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:22:07.176023  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:22:07.176083  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:22:07.213705  688914 cri.go:89] found id: ""
	I0210 13:22:07.213733  688914 logs.go:282] 0 containers: []
	W0210 13:22:07.213741  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:22:07.213746  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:22:07.213803  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:22:07.246255  688914 cri.go:89] found id: ""
	I0210 13:22:07.246283  688914 logs.go:282] 0 containers: []
	W0210 13:22:07.246291  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:22:07.246297  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:22:07.246362  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:22:07.276441  688914 cri.go:89] found id: ""
	I0210 13:22:07.276473  688914 logs.go:282] 0 containers: []
	W0210 13:22:07.276484  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:22:07.276498  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:22:07.276567  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:22:07.308208  688914 cri.go:89] found id: ""
	I0210 13:22:07.308241  688914 logs.go:282] 0 containers: []
	W0210 13:22:07.308253  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:22:07.308261  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:22:07.308323  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:22:07.339611  688914 cri.go:89] found id: ""
	I0210 13:22:07.339647  688914 logs.go:282] 0 containers: []
	W0210 13:22:07.339658  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:22:07.339667  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:22:07.339730  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:22:07.370477  688914 cri.go:89] found id: ""
	I0210 13:22:07.370506  688914 logs.go:282] 0 containers: []
	W0210 13:22:07.370517  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:22:07.370524  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:22:07.370583  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:22:07.402121  688914 cri.go:89] found id: ""
	I0210 13:22:07.402165  688914 logs.go:282] 0 containers: []
	W0210 13:22:07.402176  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:22:07.402190  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:22:07.402209  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:22:07.454750  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:22:07.454792  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:22:07.466995  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:22:07.467027  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:22:07.533399  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:22:07.533428  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:22:07.533446  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:22:07.607533  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:22:07.607575  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:22:10.143543  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:22:10.156127  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:22:10.156194  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:22:10.188966  688914 cri.go:89] found id: ""
	I0210 13:22:10.188994  688914 logs.go:282] 0 containers: []
	W0210 13:22:10.189002  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:22:10.189008  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:22:10.189064  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:22:10.224964  688914 cri.go:89] found id: ""
	I0210 13:22:10.224995  688914 logs.go:282] 0 containers: []
	W0210 13:22:10.225003  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:22:10.225008  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:22:10.225063  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:22:10.259620  688914 cri.go:89] found id: ""
	I0210 13:22:10.259649  688914 logs.go:282] 0 containers: []
	W0210 13:22:10.259657  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:22:10.259667  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:22:10.259719  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:22:10.293498  688914 cri.go:89] found id: ""
	I0210 13:22:10.293524  688914 logs.go:282] 0 containers: []
	W0210 13:22:10.293534  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:22:10.293542  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:22:10.293598  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:22:10.327794  688914 cri.go:89] found id: ""
	I0210 13:22:10.327822  688914 logs.go:282] 0 containers: []
	W0210 13:22:10.327830  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:22:10.327837  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:22:10.327892  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:22:10.362523  688914 cri.go:89] found id: ""
	I0210 13:22:10.362554  688914 logs.go:282] 0 containers: []
	W0210 13:22:10.362562  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:22:10.362569  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:22:10.362621  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:22:10.396672  688914 cri.go:89] found id: ""
	I0210 13:22:10.396708  688914 logs.go:282] 0 containers: []
	W0210 13:22:10.396719  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:22:10.396728  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:22:10.396787  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:22:10.428385  688914 cri.go:89] found id: ""
	I0210 13:22:10.428427  688914 logs.go:282] 0 containers: []
	W0210 13:22:10.428441  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:22:10.428454  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:22:10.428468  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:22:10.440460  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:22:10.440489  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:22:10.514632  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:22:10.514652  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:22:10.514668  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:22:10.592124  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:22:10.592176  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:22:10.635384  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:22:10.635424  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:22:13.187861  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:22:13.200878  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:22:13.200948  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:22:13.235842  688914 cri.go:89] found id: ""
	I0210 13:22:13.235870  688914 logs.go:282] 0 containers: []
	W0210 13:22:13.235878  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:22:13.235884  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:22:13.235938  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:22:13.267683  688914 cri.go:89] found id: ""
	I0210 13:22:13.267719  688914 logs.go:282] 0 containers: []
	W0210 13:22:13.267729  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:22:13.267737  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:22:13.267807  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:22:13.299670  688914 cri.go:89] found id: ""
	I0210 13:22:13.299702  688914 logs.go:282] 0 containers: []
	W0210 13:22:13.299712  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:22:13.299726  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:22:13.299792  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:22:13.331774  688914 cri.go:89] found id: ""
	I0210 13:22:13.331840  688914 logs.go:282] 0 containers: []
	W0210 13:22:13.331849  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:22:13.331855  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:22:13.331915  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:22:13.363351  688914 cri.go:89] found id: ""
	I0210 13:22:13.363381  688914 logs.go:282] 0 containers: []
	W0210 13:22:13.363402  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:22:13.363411  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:22:13.363482  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:22:13.394332  688914 cri.go:89] found id: ""
	I0210 13:22:13.394357  688914 logs.go:282] 0 containers: []
	W0210 13:22:13.394364  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:22:13.394370  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:22:13.394422  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:22:13.425126  688914 cri.go:89] found id: ""
	I0210 13:22:13.425159  688914 logs.go:282] 0 containers: []
	W0210 13:22:13.425171  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:22:13.425178  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:22:13.425256  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:22:13.462742  688914 cri.go:89] found id: ""
	I0210 13:22:13.462769  688914 logs.go:282] 0 containers: []
	W0210 13:22:13.462777  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:22:13.462787  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:22:13.462800  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:22:13.475421  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:22:13.475456  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:22:13.551328  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:22:13.551357  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:22:13.551374  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:22:13.622624  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:22:13.622671  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:22:13.661126  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:22:13.661161  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:22:16.213821  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:22:16.226813  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:22:16.226902  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:22:16.260697  688914 cri.go:89] found id: ""
	I0210 13:22:16.260733  688914 logs.go:282] 0 containers: []
	W0210 13:22:16.260753  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:22:16.260763  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:22:16.260836  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:22:16.293405  688914 cri.go:89] found id: ""
	I0210 13:22:16.293444  688914 logs.go:282] 0 containers: []
	W0210 13:22:16.293455  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:22:16.293463  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:22:16.293530  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:22:16.327918  688914 cri.go:89] found id: ""
	I0210 13:22:16.327954  688914 logs.go:282] 0 containers: []
	W0210 13:22:16.327965  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:22:16.327973  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:22:16.328042  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:22:16.359096  688914 cri.go:89] found id: ""
	I0210 13:22:16.359147  688914 logs.go:282] 0 containers: []
	W0210 13:22:16.359159  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:22:16.359168  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:22:16.359237  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:22:16.392394  688914 cri.go:89] found id: ""
	I0210 13:22:16.392420  688914 logs.go:282] 0 containers: []
	W0210 13:22:16.392427  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:22:16.392432  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:22:16.392490  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:22:16.424501  688914 cri.go:89] found id: ""
	I0210 13:22:16.424528  688914 logs.go:282] 0 containers: []
	W0210 13:22:16.424535  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:22:16.424541  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:22:16.424592  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:22:16.468676  688914 cri.go:89] found id: ""
	I0210 13:22:16.468708  688914 logs.go:282] 0 containers: []
	W0210 13:22:16.468719  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:22:16.468727  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:22:16.468797  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:22:16.504508  688914 cri.go:89] found id: ""
	I0210 13:22:16.504534  688914 logs.go:282] 0 containers: []
	W0210 13:22:16.504541  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:22:16.504550  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:22:16.504562  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:22:16.571930  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:22:16.572018  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:22:16.572042  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:22:16.653421  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:22:16.653462  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:22:16.690305  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:22:16.690340  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:22:16.741585  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:22:16.741631  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:22:19.257566  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:22:19.271001  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:22:19.271065  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:22:19.303718  688914 cri.go:89] found id: ""
	I0210 13:22:19.303756  688914 logs.go:282] 0 containers: []
	W0210 13:22:19.303767  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:22:19.303775  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:22:19.303840  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:22:19.333865  688914 cri.go:89] found id: ""
	I0210 13:22:19.333896  688914 logs.go:282] 0 containers: []
	W0210 13:22:19.333907  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:22:19.333929  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:22:19.333992  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:22:19.367828  688914 cri.go:89] found id: ""
	I0210 13:22:19.367853  688914 logs.go:282] 0 containers: []
	W0210 13:22:19.367862  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:22:19.367868  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:22:19.367918  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:22:19.398626  688914 cri.go:89] found id: ""
	I0210 13:22:19.398654  688914 logs.go:282] 0 containers: []
	W0210 13:22:19.398662  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:22:19.398668  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:22:19.398724  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:22:19.433113  688914 cri.go:89] found id: ""
	I0210 13:22:19.433145  688914 logs.go:282] 0 containers: []
	W0210 13:22:19.433153  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:22:19.433160  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:22:19.433213  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:22:19.464340  688914 cri.go:89] found id: ""
	I0210 13:22:19.464386  688914 logs.go:282] 0 containers: []
	W0210 13:22:19.464400  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:22:19.464409  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:22:19.464470  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:22:19.497876  688914 cri.go:89] found id: ""
	I0210 13:22:19.497908  688914 logs.go:282] 0 containers: []
	W0210 13:22:19.497917  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:22:19.497923  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:22:19.497973  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:22:19.529638  688914 cri.go:89] found id: ""
	I0210 13:22:19.529668  688914 logs.go:282] 0 containers: []
	W0210 13:22:19.529678  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:22:19.529699  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:22:19.529715  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:22:19.582310  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:22:19.582349  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:22:19.594969  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:22:19.594999  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:22:19.663747  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:22:19.663775  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:22:19.663788  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:22:19.742521  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:22:19.742566  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:22:22.282629  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:22:22.295943  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:22:22.296012  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:22:22.327834  688914 cri.go:89] found id: ""
	I0210 13:22:22.327865  688914 logs.go:282] 0 containers: []
	W0210 13:22:22.327873  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:22:22.327879  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:22:22.327930  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:22:22.360091  688914 cri.go:89] found id: ""
	I0210 13:22:22.360120  688914 logs.go:282] 0 containers: []
	W0210 13:22:22.360128  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:22:22.360134  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:22:22.360188  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:22:22.393833  688914 cri.go:89] found id: ""
	I0210 13:22:22.393859  688914 logs.go:282] 0 containers: []
	W0210 13:22:22.393866  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:22:22.393872  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:22:22.393936  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:22:22.426300  688914 cri.go:89] found id: ""
	I0210 13:22:22.426335  688914 logs.go:282] 0 containers: []
	W0210 13:22:22.426344  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:22:22.426351  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:22:22.426414  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:22:22.465984  688914 cri.go:89] found id: ""
	I0210 13:22:22.466012  688914 logs.go:282] 0 containers: []
	W0210 13:22:22.466023  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:22:22.466030  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:22:22.466094  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:22:22.498399  688914 cri.go:89] found id: ""
	I0210 13:22:22.498430  688914 logs.go:282] 0 containers: []
	W0210 13:22:22.498450  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:22:22.498459  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:22:22.498537  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:22:22.532036  688914 cri.go:89] found id: ""
	I0210 13:22:22.532069  688914 logs.go:282] 0 containers: []
	W0210 13:22:22.532077  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:22:22.532083  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:22:22.532137  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:22:22.565320  688914 cri.go:89] found id: ""
	I0210 13:22:22.565350  688914 logs.go:282] 0 containers: []
	W0210 13:22:22.565358  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:22:22.565368  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:22:22.565382  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:22:22.617527  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:22:22.617567  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:22:22.629810  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:22:22.629845  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:22:22.700450  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:22:22.700477  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:22:22.700490  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:22:22.773012  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:22:22.773045  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:22:25.309821  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:22:25.322645  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:22:25.322730  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:22:25.356586  688914 cri.go:89] found id: ""
	I0210 13:22:25.356626  688914 logs.go:282] 0 containers: []
	W0210 13:22:25.356638  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:22:25.356646  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:22:25.356714  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:22:25.389754  688914 cri.go:89] found id: ""
	I0210 13:22:25.389785  688914 logs.go:282] 0 containers: []
	W0210 13:22:25.389797  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:22:25.389805  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:22:25.389880  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:22:25.425580  688914 cri.go:89] found id: ""
	I0210 13:22:25.425618  688914 logs.go:282] 0 containers: []
	W0210 13:22:25.425630  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:22:25.425639  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:22:25.425703  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:22:25.459556  688914 cri.go:89] found id: ""
	I0210 13:22:25.459590  688914 logs.go:282] 0 containers: []
	W0210 13:22:25.459602  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:22:25.459612  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:22:25.459671  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:22:25.499213  688914 cri.go:89] found id: ""
	I0210 13:22:25.499248  688914 logs.go:282] 0 containers: []
	W0210 13:22:25.499258  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:22:25.499265  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:22:25.499329  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:22:25.536625  688914 cri.go:89] found id: ""
	I0210 13:22:25.536661  688914 logs.go:282] 0 containers: []
	W0210 13:22:25.536669  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:22:25.536676  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:22:25.536735  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:22:25.571497  688914 cri.go:89] found id: ""
	I0210 13:22:25.571532  688914 logs.go:282] 0 containers: []
	W0210 13:22:25.571542  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:22:25.571557  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:22:25.571623  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:22:25.608706  688914 cri.go:89] found id: ""
	I0210 13:22:25.608736  688914 logs.go:282] 0 containers: []
	W0210 13:22:25.608744  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:22:25.608757  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:22:25.608773  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:22:25.645744  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:22:25.645781  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:22:25.698262  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:22:25.698316  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:22:25.711488  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:22:25.711516  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:22:25.779334  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:22:25.779376  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:22:25.779392  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:22:28.354212  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:22:28.366905  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:22:28.366977  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:22:28.399476  688914 cri.go:89] found id: ""
	I0210 13:22:28.399511  688914 logs.go:282] 0 containers: []
	W0210 13:22:28.399523  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:22:28.399534  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:22:28.399605  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:22:28.430332  688914 cri.go:89] found id: ""
	I0210 13:22:28.430368  688914 logs.go:282] 0 containers: []
	W0210 13:22:28.430380  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:22:28.430389  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:22:28.430457  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:22:28.463179  688914 cri.go:89] found id: ""
	I0210 13:22:28.463207  688914 logs.go:282] 0 containers: []
	W0210 13:22:28.463218  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:22:28.463228  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:22:28.463297  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:22:28.494645  688914 cri.go:89] found id: ""
	I0210 13:22:28.494677  688914 logs.go:282] 0 containers: []
	W0210 13:22:28.494688  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:22:28.494697  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:22:28.494764  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:22:28.532928  688914 cri.go:89] found id: ""
	I0210 13:22:28.532959  688914 logs.go:282] 0 containers: []
	W0210 13:22:28.532969  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:22:28.532978  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:22:28.533051  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:22:28.568898  688914 cri.go:89] found id: ""
	I0210 13:22:28.568927  688914 logs.go:282] 0 containers: []
	W0210 13:22:28.568936  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:22:28.568944  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:22:28.569004  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:22:28.606606  688914 cri.go:89] found id: ""
	I0210 13:22:28.606640  688914 logs.go:282] 0 containers: []
	W0210 13:22:28.606652  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:22:28.606659  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:22:28.606728  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:22:28.645369  688914 cri.go:89] found id: ""
	I0210 13:22:28.645411  688914 logs.go:282] 0 containers: []
	W0210 13:22:28.645420  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:22:28.645429  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:22:28.645441  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:22:28.693936  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:22:28.693983  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:22:28.706873  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:22:28.706902  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:22:28.776326  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:22:28.776346  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:22:28.776360  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:22:28.851777  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:22:28.851820  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:22:31.388043  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:22:31.401213  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:22:31.401286  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:22:31.434048  688914 cri.go:89] found id: ""
	I0210 13:22:31.434076  688914 logs.go:282] 0 containers: []
	W0210 13:22:31.434087  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:22:31.434094  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:22:31.434165  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:22:31.464839  688914 cri.go:89] found id: ""
	I0210 13:22:31.464874  688914 logs.go:282] 0 containers: []
	W0210 13:22:31.464885  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:22:31.464893  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:22:31.464961  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:22:31.496149  688914 cri.go:89] found id: ""
	I0210 13:22:31.496187  688914 logs.go:282] 0 containers: []
	W0210 13:22:31.496197  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:22:31.496206  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:22:31.496275  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:22:31.528248  688914 cri.go:89] found id: ""
	I0210 13:22:31.528283  688914 logs.go:282] 0 containers: []
	W0210 13:22:31.528294  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:22:31.528303  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:22:31.528372  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:22:31.560505  688914 cri.go:89] found id: ""
	I0210 13:22:31.560535  688914 logs.go:282] 0 containers: []
	W0210 13:22:31.560547  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:22:31.560556  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:22:31.560624  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:22:31.591495  688914 cri.go:89] found id: ""
	I0210 13:22:31.591526  688914 logs.go:282] 0 containers: []
	W0210 13:22:31.591536  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:22:31.591545  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:22:31.591612  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:22:31.623545  688914 cri.go:89] found id: ""
	I0210 13:22:31.623575  688914 logs.go:282] 0 containers: []
	W0210 13:22:31.623586  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:22:31.623595  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:22:31.623663  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:22:31.656483  688914 cri.go:89] found id: ""
	I0210 13:22:31.656518  688914 logs.go:282] 0 containers: []
	W0210 13:22:31.656529  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:22:31.656548  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:22:31.656565  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:22:31.704909  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:22:31.704948  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:22:31.718022  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:22:31.718065  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:22:31.787676  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:22:31.787703  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:22:31.787716  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:22:31.860438  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:22:31.860476  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:22:34.398338  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:22:34.411261  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:22:34.411324  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:22:34.443799  688914 cri.go:89] found id: ""
	I0210 13:22:34.443836  688914 logs.go:282] 0 containers: []
	W0210 13:22:34.443847  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:22:34.443855  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:22:34.443932  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:22:34.482295  688914 cri.go:89] found id: ""
	I0210 13:22:34.482343  688914 logs.go:282] 0 containers: []
	W0210 13:22:34.482354  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:22:34.482361  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:22:34.482432  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:22:34.517571  688914 cri.go:89] found id: ""
	I0210 13:22:34.517599  688914 logs.go:282] 0 containers: []
	W0210 13:22:34.517607  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:22:34.517613  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:22:34.517668  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:22:34.552264  688914 cri.go:89] found id: ""
	I0210 13:22:34.552293  688914 logs.go:282] 0 containers: []
	W0210 13:22:34.552301  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:22:34.552308  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:22:34.552370  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:22:34.584819  688914 cri.go:89] found id: ""
	I0210 13:22:34.584846  688914 logs.go:282] 0 containers: []
	W0210 13:22:34.584853  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:22:34.584860  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:22:34.584923  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:22:34.617476  688914 cri.go:89] found id: ""
	I0210 13:22:34.617507  688914 logs.go:282] 0 containers: []
	W0210 13:22:34.617515  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:22:34.617523  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:22:34.617593  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:22:34.649755  688914 cri.go:89] found id: ""
	I0210 13:22:34.649785  688914 logs.go:282] 0 containers: []
	W0210 13:22:34.649793  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:22:34.649800  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:22:34.649855  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:22:34.680284  688914 cri.go:89] found id: ""
	I0210 13:22:34.680317  688914 logs.go:282] 0 containers: []
	W0210 13:22:34.680326  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:22:34.680337  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:22:34.680354  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:22:34.729883  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:22:34.729920  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:22:34.743208  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:22:34.743239  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:22:34.814461  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:22:34.814488  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:22:34.814504  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:22:34.886621  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:22:34.886676  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:22:37.433701  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:22:37.447747  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:22:37.447815  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:22:37.483277  688914 cri.go:89] found id: ""
	I0210 13:22:37.483319  688914 logs.go:282] 0 containers: []
	W0210 13:22:37.483333  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:22:37.483345  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:22:37.483411  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:22:37.516165  688914 cri.go:89] found id: ""
	I0210 13:22:37.516194  688914 logs.go:282] 0 containers: []
	W0210 13:22:37.516202  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:22:37.516208  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:22:37.516264  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:22:37.553004  688914 cri.go:89] found id: ""
	I0210 13:22:37.553031  688914 logs.go:282] 0 containers: []
	W0210 13:22:37.553038  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:22:37.553044  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:22:37.553097  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:22:37.585160  688914 cri.go:89] found id: ""
	I0210 13:22:37.585197  688914 logs.go:282] 0 containers: []
	W0210 13:22:37.585208  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:22:37.585215  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:22:37.585285  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:22:37.620159  688914 cri.go:89] found id: ""
	I0210 13:22:37.620191  688914 logs.go:282] 0 containers: []
	W0210 13:22:37.620202  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:22:37.620210  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:22:37.620277  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:22:37.654888  688914 cri.go:89] found id: ""
	I0210 13:22:37.654925  688914 logs.go:282] 0 containers: []
	W0210 13:22:37.654936  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:22:37.654945  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:22:37.655013  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:22:37.688598  688914 cri.go:89] found id: ""
	I0210 13:22:37.688627  688914 logs.go:282] 0 containers: []
	W0210 13:22:37.688635  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:22:37.688641  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:22:37.688695  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:22:37.719791  688914 cri.go:89] found id: ""
	I0210 13:22:37.719821  688914 logs.go:282] 0 containers: []
	W0210 13:22:37.719833  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:22:37.719847  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:22:37.719867  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:22:37.785149  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:22:37.785180  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:22:37.785197  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:22:37.875482  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:22:37.875517  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:22:37.922434  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:22:37.922472  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:22:37.973451  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:22:37.973491  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:22:40.487146  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:22:40.500417  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:22:40.500481  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:22:40.531921  688914 cri.go:89] found id: ""
	I0210 13:22:40.531949  688914 logs.go:282] 0 containers: []
	W0210 13:22:40.531957  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:22:40.531963  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:22:40.532017  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:22:40.562006  688914 cri.go:89] found id: ""
	I0210 13:22:40.562034  688914 logs.go:282] 0 containers: []
	W0210 13:22:40.562042  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:22:40.562048  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:22:40.562103  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:22:40.592192  688914 cri.go:89] found id: ""
	I0210 13:22:40.592234  688914 logs.go:282] 0 containers: []
	W0210 13:22:40.592245  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:22:40.592254  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:22:40.592318  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:22:40.623059  688914 cri.go:89] found id: ""
	I0210 13:22:40.623094  688914 logs.go:282] 0 containers: []
	W0210 13:22:40.623105  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:22:40.623113  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:22:40.623176  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:22:40.659923  688914 cri.go:89] found id: ""
	I0210 13:22:40.659953  688914 logs.go:282] 0 containers: []
	W0210 13:22:40.659960  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:22:40.659969  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:22:40.660028  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:22:40.691465  688914 cri.go:89] found id: ""
	I0210 13:22:40.691494  688914 logs.go:282] 0 containers: []
	W0210 13:22:40.691503  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:22:40.691512  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:22:40.691572  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:22:40.722386  688914 cri.go:89] found id: ""
	I0210 13:22:40.722418  688914 logs.go:282] 0 containers: []
	W0210 13:22:40.722428  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:22:40.722434  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:22:40.722493  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:22:40.753778  688914 cri.go:89] found id: ""
	I0210 13:22:40.753815  688914 logs.go:282] 0 containers: []
	W0210 13:22:40.753826  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:22:40.753840  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:22:40.753854  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:22:40.833890  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:22:40.833954  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:22:40.869787  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:22:40.869823  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:22:40.920122  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:22:40.920168  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:22:40.933562  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:22:40.933605  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:22:41.003630  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:22:43.504344  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:22:43.517247  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:22:43.517326  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:22:43.569399  688914 cri.go:89] found id: ""
	I0210 13:22:43.569425  688914 logs.go:282] 0 containers: []
	W0210 13:22:43.569434  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:22:43.569440  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:22:43.569508  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:22:43.599887  688914 cri.go:89] found id: ""
	I0210 13:22:43.599916  688914 logs.go:282] 0 containers: []
	W0210 13:22:43.599924  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:22:43.599930  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:22:43.599982  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:22:43.635202  688914 cri.go:89] found id: ""
	I0210 13:22:43.635233  688914 logs.go:282] 0 containers: []
	W0210 13:22:43.635242  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:22:43.635249  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:22:43.635306  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:22:43.669161  688914 cri.go:89] found id: ""
	I0210 13:22:43.669196  688914 logs.go:282] 0 containers: []
	W0210 13:22:43.669208  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:22:43.669219  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:22:43.669287  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:22:43.701551  688914 cri.go:89] found id: ""
	I0210 13:22:43.701578  688914 logs.go:282] 0 containers: []
	W0210 13:22:43.701586  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:22:43.701592  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:22:43.701659  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:22:43.734874  688914 cri.go:89] found id: ""
	I0210 13:22:43.734910  688914 logs.go:282] 0 containers: []
	W0210 13:22:43.734921  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:22:43.734930  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:22:43.735002  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:22:43.770728  688914 cri.go:89] found id: ""
	I0210 13:22:43.770760  688914 logs.go:282] 0 containers: []
	W0210 13:22:43.770771  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:22:43.770777  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:22:43.770828  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:22:43.803260  688914 cri.go:89] found id: ""
	I0210 13:22:43.803288  688914 logs.go:282] 0 containers: []
	W0210 13:22:43.803297  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:22:43.803307  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:22:43.803324  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:22:43.855433  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:22:43.855476  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:22:43.869000  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:22:43.869029  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:22:43.935803  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:22:43.935825  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:22:43.935838  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:22:44.009771  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:22:44.009816  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:22:46.554324  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:22:46.567396  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:22:46.567462  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:22:46.600654  688914 cri.go:89] found id: ""
	I0210 13:22:46.600683  688914 logs.go:282] 0 containers: []
	W0210 13:22:46.600694  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:22:46.600703  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:22:46.600762  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:22:46.634004  688914 cri.go:89] found id: ""
	I0210 13:22:46.634043  688914 logs.go:282] 0 containers: []
	W0210 13:22:46.634075  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:22:46.634084  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:22:46.634152  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:22:46.668667  688914 cri.go:89] found id: ""
	I0210 13:22:46.668703  688914 logs.go:282] 0 containers: []
	W0210 13:22:46.668715  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:22:46.668723  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:22:46.668779  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:22:46.699717  688914 cri.go:89] found id: ""
	I0210 13:22:46.699746  688914 logs.go:282] 0 containers: []
	W0210 13:22:46.699754  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:22:46.699760  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:22:46.699820  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:22:46.734342  688914 cri.go:89] found id: ""
	I0210 13:22:46.734376  688914 logs.go:282] 0 containers: []
	W0210 13:22:46.734395  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:22:46.734411  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:22:46.734481  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:22:46.768739  688914 cri.go:89] found id: ""
	I0210 13:22:46.768770  688914 logs.go:282] 0 containers: []
	W0210 13:22:46.768779  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:22:46.768786  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:22:46.768840  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:22:46.805669  688914 cri.go:89] found id: ""
	I0210 13:22:46.805700  688914 logs.go:282] 0 containers: []
	W0210 13:22:46.805710  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:22:46.805719  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:22:46.805771  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:22:46.842775  688914 cri.go:89] found id: ""
	I0210 13:22:46.842804  688914 logs.go:282] 0 containers: []
	W0210 13:22:46.842812  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:22:46.842822  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:22:46.842835  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:22:46.897203  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:22:46.897244  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:22:46.911287  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:22:46.911318  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:22:46.978810  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:22:46.978840  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:22:46.978857  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:22:47.057703  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:22:47.057743  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:22:49.598436  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:22:49.610738  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:22:49.610800  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:22:49.641464  688914 cri.go:89] found id: ""
	I0210 13:22:49.641491  688914 logs.go:282] 0 containers: []
	W0210 13:22:49.641499  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:22:49.641505  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:22:49.641555  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:22:49.673754  688914 cri.go:89] found id: ""
	I0210 13:22:49.673779  688914 logs.go:282] 0 containers: []
	W0210 13:22:49.673788  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:22:49.673793  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:22:49.673847  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:22:49.705555  688914 cri.go:89] found id: ""
	I0210 13:22:49.705592  688914 logs.go:282] 0 containers: []
	W0210 13:22:49.705628  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:22:49.705637  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:22:49.705720  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:22:49.739772  688914 cri.go:89] found id: ""
	I0210 13:22:49.739805  688914 logs.go:282] 0 containers: []
	W0210 13:22:49.739817  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:22:49.739826  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:22:49.739889  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:22:49.774876  688914 cri.go:89] found id: ""
	I0210 13:22:49.774909  688914 logs.go:282] 0 containers: []
	W0210 13:22:49.774920  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:22:49.774927  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:22:49.774984  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:22:49.806914  688914 cri.go:89] found id: ""
	I0210 13:22:49.806941  688914 logs.go:282] 0 containers: []
	W0210 13:22:49.806948  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:22:49.806954  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:22:49.807006  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:22:49.843657  688914 cri.go:89] found id: ""
	I0210 13:22:49.843690  688914 logs.go:282] 0 containers: []
	W0210 13:22:49.843701  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:22:49.843709  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:22:49.843778  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:22:49.877304  688914 cri.go:89] found id: ""
	I0210 13:22:49.877337  688914 logs.go:282] 0 containers: []
	W0210 13:22:49.877348  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:22:49.877362  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:22:49.877377  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:22:49.954568  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:22:49.954613  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:22:50.008886  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:22:50.008920  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:22:50.068047  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:22:50.068088  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:22:50.085428  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:22:50.085475  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:22:50.154404  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:22:52.656523  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:22:52.669626  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:22:52.669693  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:22:52.701762  688914 cri.go:89] found id: ""
	I0210 13:22:52.701788  688914 logs.go:282] 0 containers: []
	W0210 13:22:52.701796  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:22:52.701802  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:22:52.701855  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:22:52.737808  688914 cri.go:89] found id: ""
	I0210 13:22:52.737836  688914 logs.go:282] 0 containers: []
	W0210 13:22:52.737843  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:22:52.737849  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:22:52.737903  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:22:52.771424  688914 cri.go:89] found id: ""
	I0210 13:22:52.771459  688914 logs.go:282] 0 containers: []
	W0210 13:22:52.771470  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:22:52.771479  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:22:52.771552  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:22:52.802551  688914 cri.go:89] found id: ""
	I0210 13:22:52.802582  688914 logs.go:282] 0 containers: []
	W0210 13:22:52.802592  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:22:52.802598  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:22:52.802675  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:22:52.834554  688914 cri.go:89] found id: ""
	I0210 13:22:52.834577  688914 logs.go:282] 0 containers: []
	W0210 13:22:52.834585  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:22:52.834591  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:22:52.834645  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:22:52.866579  688914 cri.go:89] found id: ""
	I0210 13:22:52.866607  688914 logs.go:282] 0 containers: []
	W0210 13:22:52.866617  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:22:52.866625  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:22:52.866699  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:22:52.900918  688914 cri.go:89] found id: ""
	I0210 13:22:52.900956  688914 logs.go:282] 0 containers: []
	W0210 13:22:52.900970  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:22:52.900979  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:22:52.901049  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:22:52.936465  688914 cri.go:89] found id: ""
	I0210 13:22:52.936498  688914 logs.go:282] 0 containers: []
	W0210 13:22:52.936509  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:22:52.936523  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:22:52.936539  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:22:53.015361  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:22:53.015400  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:22:53.051429  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:22:53.051464  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:22:53.100358  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:22:53.100394  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:22:53.114217  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:22:53.114247  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:22:53.180149  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:22:55.680489  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:22:55.697946  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:22:55.698028  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:22:55.757490  688914 cri.go:89] found id: ""
	I0210 13:22:55.757526  688914 logs.go:282] 0 containers: []
	W0210 13:22:55.757538  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:22:55.757546  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:22:55.757616  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:22:55.799481  688914 cri.go:89] found id: ""
	I0210 13:22:55.799518  688914 logs.go:282] 0 containers: []
	W0210 13:22:55.799529  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:22:55.799538  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:22:55.799612  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:22:55.834161  688914 cri.go:89] found id: ""
	I0210 13:22:55.834192  688914 logs.go:282] 0 containers: []
	W0210 13:22:55.834203  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:22:55.834210  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:22:55.834277  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:22:55.864593  688914 cri.go:89] found id: ""
	I0210 13:22:55.864641  688914 logs.go:282] 0 containers: []
	W0210 13:22:55.864654  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:22:55.864672  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:22:55.864737  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:22:55.904747  688914 cri.go:89] found id: ""
	I0210 13:22:55.904791  688914 logs.go:282] 0 containers: []
	W0210 13:22:55.904804  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:22:55.904814  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:22:55.904893  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:22:55.939118  688914 cri.go:89] found id: ""
	I0210 13:22:55.939149  688914 logs.go:282] 0 containers: []
	W0210 13:22:55.939157  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:22:55.939164  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:22:55.939220  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:22:55.970617  688914 cri.go:89] found id: ""
	I0210 13:22:55.970651  688914 logs.go:282] 0 containers: []
	W0210 13:22:55.970660  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:22:55.970666  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:22:55.970721  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:22:56.007815  688914 cri.go:89] found id: ""
	I0210 13:22:56.007847  688914 logs.go:282] 0 containers: []
	W0210 13:22:56.007856  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:22:56.007865  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:22:56.007880  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:22:56.088577  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:22:56.088623  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:22:56.128623  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:22:56.128661  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:22:56.178371  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:22:56.178413  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:22:56.195345  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:22:56.195393  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:22:56.262344  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:22:58.764055  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:22:58.776863  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:22:58.776923  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:22:58.809605  688914 cri.go:89] found id: ""
	I0210 13:22:58.809635  688914 logs.go:282] 0 containers: []
	W0210 13:22:58.809646  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:22:58.809654  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:22:58.809724  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:22:58.846426  688914 cri.go:89] found id: ""
	I0210 13:22:58.846462  688914 logs.go:282] 0 containers: []
	W0210 13:22:58.846476  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:22:58.846486  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:22:58.846553  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:22:58.883039  688914 cri.go:89] found id: ""
	I0210 13:22:58.883067  688914 logs.go:282] 0 containers: []
	W0210 13:22:58.883077  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:22:58.883085  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:22:58.883150  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:22:58.917440  688914 cri.go:89] found id: ""
	I0210 13:22:58.917471  688914 logs.go:282] 0 containers: []
	W0210 13:22:58.917481  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:22:58.917488  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:22:58.917553  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:22:58.954429  688914 cri.go:89] found id: ""
	I0210 13:22:58.954453  688914 logs.go:282] 0 containers: []
	W0210 13:22:58.954460  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:22:58.954467  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:22:58.954528  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:22:58.991080  688914 cri.go:89] found id: ""
	I0210 13:22:58.991110  688914 logs.go:282] 0 containers: []
	W0210 13:22:58.991119  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:22:58.991128  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:22:58.991200  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:22:59.029853  688914 cri.go:89] found id: ""
	I0210 13:22:59.029886  688914 logs.go:282] 0 containers: []
	W0210 13:22:59.029898  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:22:59.029906  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:22:59.029967  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:22:59.068594  688914 cri.go:89] found id: ""
	I0210 13:22:59.068626  688914 logs.go:282] 0 containers: []
	W0210 13:22:59.068638  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:22:59.068652  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:22:59.068670  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:22:59.154387  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:22:59.154432  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:22:59.191546  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:22:59.191578  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:22:59.246652  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:22:59.246690  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:22:59.260269  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:22:59.260307  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:22:59.329742  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:23:01.830306  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:23:01.843349  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:23:01.843428  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:23:01.886458  688914 cri.go:89] found id: ""
	I0210 13:23:01.886489  688914 logs.go:282] 0 containers: []
	W0210 13:23:01.886501  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:23:01.886510  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:23:01.886576  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:23:01.927375  688914 cri.go:89] found id: ""
	I0210 13:23:01.927411  688914 logs.go:282] 0 containers: []
	W0210 13:23:01.927421  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:23:01.927429  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:23:01.927501  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:23:01.959861  688914 cri.go:89] found id: ""
	I0210 13:23:01.959891  688914 logs.go:282] 0 containers: []
	W0210 13:23:01.959903  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:23:01.959912  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:23:01.959988  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:23:01.997469  688914 cri.go:89] found id: ""
	I0210 13:23:01.997505  688914 logs.go:282] 0 containers: []
	W0210 13:23:01.997516  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:23:01.997525  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:23:01.997599  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:23:02.034320  688914 cri.go:89] found id: ""
	I0210 13:23:02.034352  688914 logs.go:282] 0 containers: []
	W0210 13:23:02.034362  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:23:02.034371  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:23:02.034440  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:23:02.065635  688914 cri.go:89] found id: ""
	I0210 13:23:02.065669  688914 logs.go:282] 0 containers: []
	W0210 13:23:02.065680  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:23:02.065689  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:23:02.065774  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:23:02.099296  688914 cri.go:89] found id: ""
	I0210 13:23:02.099328  688914 logs.go:282] 0 containers: []
	W0210 13:23:02.099339  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:23:02.099347  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:23:02.099416  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:23:02.133925  688914 cri.go:89] found id: ""
	I0210 13:23:02.133957  688914 logs.go:282] 0 containers: []
	W0210 13:23:02.133967  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:23:02.133983  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:23:02.133999  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:23:02.183123  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:23:02.183169  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:23:02.196908  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:23:02.196954  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:23:02.269797  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:23:02.269830  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:23:02.269844  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:23:02.352950  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:23:02.353003  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:23:04.897005  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:23:04.911010  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:23:04.911094  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:23:04.949440  688914 cri.go:89] found id: ""
	I0210 13:23:04.949479  688914 logs.go:282] 0 containers: []
	W0210 13:23:04.949490  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:23:04.949499  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:23:04.949572  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:23:04.987824  688914 cri.go:89] found id: ""
	I0210 13:23:04.987862  688914 logs.go:282] 0 containers: []
	W0210 13:23:04.987874  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:23:04.987886  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:23:04.987963  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:23:05.032334  688914 cri.go:89] found id: ""
	I0210 13:23:05.032370  688914 logs.go:282] 0 containers: []
	W0210 13:23:05.032380  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:23:05.032389  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:23:05.032446  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:23:05.071749  688914 cri.go:89] found id: ""
	I0210 13:23:05.071781  688914 logs.go:282] 0 containers: []
	W0210 13:23:05.071793  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:23:05.071802  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:23:05.071871  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:23:05.105650  688914 cri.go:89] found id: ""
	I0210 13:23:05.105685  688914 logs.go:282] 0 containers: []
	W0210 13:23:05.105694  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:23:05.105700  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:23:05.105766  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:23:05.142578  688914 cri.go:89] found id: ""
	I0210 13:23:05.142622  688914 logs.go:282] 0 containers: []
	W0210 13:23:05.142635  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:23:05.142645  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:23:05.142717  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:23:05.178563  688914 cri.go:89] found id: ""
	I0210 13:23:05.178592  688914 logs.go:282] 0 containers: []
	W0210 13:23:05.178600  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:23:05.178607  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:23:05.178663  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:23:05.217621  688914 cri.go:89] found id: ""
	I0210 13:23:05.217653  688914 logs.go:282] 0 containers: []
	W0210 13:23:05.217665  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:23:05.217684  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:23:05.217702  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:23:05.258268  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:23:05.258323  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:23:05.309662  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:23:05.309707  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:23:05.323603  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:23:05.323637  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:23:05.398001  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:23:05.398037  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:23:05.398053  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:23:07.977262  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:23:07.991495  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:23:07.991604  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:23:08.039451  688914 cri.go:89] found id: ""
	I0210 13:23:08.039488  688914 logs.go:282] 0 containers: []
	W0210 13:23:08.039499  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:23:08.039507  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:23:08.039581  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:23:08.083413  688914 cri.go:89] found id: ""
	I0210 13:23:08.083445  688914 logs.go:282] 0 containers: []
	W0210 13:23:08.083456  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:23:08.083463  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:23:08.083533  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:23:08.128518  688914 cri.go:89] found id: ""
	I0210 13:23:08.128562  688914 logs.go:282] 0 containers: []
	W0210 13:23:08.128571  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:23:08.128577  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:23:08.128640  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:23:08.167328  688914 cri.go:89] found id: ""
	I0210 13:23:08.167364  688914 logs.go:282] 0 containers: []
	W0210 13:23:08.167373  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:23:08.167379  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:23:08.167435  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:23:08.211420  688914 cri.go:89] found id: ""
	I0210 13:23:08.211468  688914 logs.go:282] 0 containers: []
	W0210 13:23:08.211482  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:23:08.211490  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:23:08.211570  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:23:08.247617  688914 cri.go:89] found id: ""
	I0210 13:23:08.247654  688914 logs.go:282] 0 containers: []
	W0210 13:23:08.247665  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:23:08.247673  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:23:08.247746  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:23:08.286516  688914 cri.go:89] found id: ""
	I0210 13:23:08.286559  688914 logs.go:282] 0 containers: []
	W0210 13:23:08.286571  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:23:08.286579  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:23:08.286657  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:23:08.328054  688914 cri.go:89] found id: ""
	I0210 13:23:08.328088  688914 logs.go:282] 0 containers: []
	W0210 13:23:08.328100  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:23:08.328113  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:23:08.328136  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:23:08.367281  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:23:08.367332  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:23:08.421940  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:23:08.421981  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:23:08.439361  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:23:08.439395  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:23:08.523122  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:23:08.523160  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:23:08.523177  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:23:11.107090  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:23:11.119686  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:23:11.119751  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:23:11.149292  688914 cri.go:89] found id: ""
	I0210 13:23:11.149318  688914 logs.go:282] 0 containers: []
	W0210 13:23:11.149326  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:23:11.149331  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:23:11.149381  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:23:11.180492  688914 cri.go:89] found id: ""
	I0210 13:23:11.180530  688914 logs.go:282] 0 containers: []
	W0210 13:23:11.180543  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:23:11.180552  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:23:11.180611  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:23:11.215620  688914 cri.go:89] found id: ""
	I0210 13:23:11.215653  688914 logs.go:282] 0 containers: []
	W0210 13:23:11.215665  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:23:11.215673  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:23:11.215745  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:23:11.247313  688914 cri.go:89] found id: ""
	I0210 13:23:11.247347  688914 logs.go:282] 0 containers: []
	W0210 13:23:11.247358  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:23:11.247365  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:23:11.247433  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:23:11.278465  688914 cri.go:89] found id: ""
	I0210 13:23:11.278497  688914 logs.go:282] 0 containers: []
	W0210 13:23:11.278507  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:23:11.278515  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:23:11.278582  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:23:11.318884  688914 cri.go:89] found id: ""
	I0210 13:23:11.318920  688914 logs.go:282] 0 containers: []
	W0210 13:23:11.318932  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:23:11.318940  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:23:11.319004  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:23:11.350730  688914 cri.go:89] found id: ""
	I0210 13:23:11.350767  688914 logs.go:282] 0 containers: []
	W0210 13:23:11.350779  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:23:11.350787  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:23:11.350845  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:23:11.389319  688914 cri.go:89] found id: ""
	I0210 13:23:11.389348  688914 logs.go:282] 0 containers: []
	W0210 13:23:11.389359  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:23:11.389408  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:23:11.389424  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:23:11.437115  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:23:11.437158  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:23:11.449479  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:23:11.449513  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:23:11.516641  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:23:11.516670  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:23:11.516690  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:23:11.593312  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:23:11.593351  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:23:14.132135  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:23:14.144497  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:23:14.144564  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:23:14.177274  688914 cri.go:89] found id: ""
	I0210 13:23:14.177312  688914 logs.go:282] 0 containers: []
	W0210 13:23:14.177325  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:23:14.177335  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:23:14.177419  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:23:14.214223  688914 cri.go:89] found id: ""
	I0210 13:23:14.214271  688914 logs.go:282] 0 containers: []
	W0210 13:23:14.214282  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:23:14.214288  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:23:14.214350  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:23:14.253543  688914 cri.go:89] found id: ""
	I0210 13:23:14.253571  688914 logs.go:282] 0 containers: []
	W0210 13:23:14.253585  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:23:14.253591  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:23:14.253648  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:23:14.290241  688914 cri.go:89] found id: ""
	I0210 13:23:14.290280  688914 logs.go:282] 0 containers: []
	W0210 13:23:14.290293  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:23:14.290304  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:23:14.290390  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:23:14.326976  688914 cri.go:89] found id: ""
	I0210 13:23:14.327008  688914 logs.go:282] 0 containers: []
	W0210 13:23:14.327017  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:23:14.327024  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:23:14.327078  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:23:14.364710  688914 cri.go:89] found id: ""
	I0210 13:23:14.364739  688914 logs.go:282] 0 containers: []
	W0210 13:23:14.364748  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:23:14.364754  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:23:14.364808  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:23:14.402479  688914 cri.go:89] found id: ""
	I0210 13:23:14.402514  688914 logs.go:282] 0 containers: []
	W0210 13:23:14.402526  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:23:14.402535  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:23:14.402595  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:23:14.439311  688914 cri.go:89] found id: ""
	I0210 13:23:14.439347  688914 logs.go:282] 0 containers: []
	W0210 13:23:14.439357  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:23:14.439374  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:23:14.439388  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:23:14.493272  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:23:14.493310  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:23:14.506287  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:23:14.506324  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:23:14.576130  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:23:14.576161  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:23:14.576185  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:23:14.649565  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:23:14.649610  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:23:17.186313  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:23:17.198474  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:23:17.198540  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:23:17.232712  688914 cri.go:89] found id: ""
	I0210 13:23:17.232737  688914 logs.go:282] 0 containers: []
	W0210 13:23:17.232749  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:23:17.232757  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:23:17.232810  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:23:17.268459  688914 cri.go:89] found id: ""
	I0210 13:23:17.268490  688914 logs.go:282] 0 containers: []
	W0210 13:23:17.268500  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:23:17.268511  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:23:17.268570  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:23:17.303904  688914 cri.go:89] found id: ""
	I0210 13:23:17.303938  688914 logs.go:282] 0 containers: []
	W0210 13:23:17.303946  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:23:17.303953  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:23:17.304018  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:23:17.342160  688914 cri.go:89] found id: ""
	I0210 13:23:17.342189  688914 logs.go:282] 0 containers: []
	W0210 13:23:17.342198  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:23:17.342204  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:23:17.342261  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:23:17.380279  688914 cri.go:89] found id: ""
	I0210 13:23:17.380309  688914 logs.go:282] 0 containers: []
	W0210 13:23:17.380320  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:23:17.380328  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:23:17.380400  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:23:17.412760  688914 cri.go:89] found id: ""
	I0210 13:23:17.412793  688914 logs.go:282] 0 containers: []
	W0210 13:23:17.412804  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:23:17.412813  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:23:17.412876  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:23:17.449783  688914 cri.go:89] found id: ""
	I0210 13:23:17.449818  688914 logs.go:282] 0 containers: []
	W0210 13:23:17.449829  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:23:17.449837  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:23:17.449913  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:23:17.487462  688914 cri.go:89] found id: ""
	I0210 13:23:17.487488  688914 logs.go:282] 0 containers: []
	W0210 13:23:17.487496  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:23:17.487505  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:23:17.487517  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:23:17.504170  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:23:17.504216  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:23:17.595201  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:23:17.595217  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:23:17.595230  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:23:17.676943  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:23:17.676983  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:23:17.719374  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:23:17.719410  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:23:20.279300  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:23:20.298527  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:23:20.298621  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:23:20.338388  688914 cri.go:89] found id: ""
	I0210 13:23:20.338423  688914 logs.go:282] 0 containers: []
	W0210 13:23:20.338434  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:23:20.338443  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:23:20.338511  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:23:20.380340  688914 cri.go:89] found id: ""
	I0210 13:23:20.380372  688914 logs.go:282] 0 containers: []
	W0210 13:23:20.380383  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:23:20.380390  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:23:20.380460  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:23:20.419776  688914 cri.go:89] found id: ""
	I0210 13:23:20.419811  688914 logs.go:282] 0 containers: []
	W0210 13:23:20.419822  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:23:20.419830  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:23:20.419900  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:23:20.464246  688914 cri.go:89] found id: ""
	I0210 13:23:20.464282  688914 logs.go:282] 0 containers: []
	W0210 13:23:20.464297  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:23:20.464306  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:23:20.464389  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:23:20.503421  688914 cri.go:89] found id: ""
	I0210 13:23:20.503458  688914 logs.go:282] 0 containers: []
	W0210 13:23:20.503472  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:23:20.503485  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:23:20.503556  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:23:20.547322  688914 cri.go:89] found id: ""
	I0210 13:23:20.547366  688914 logs.go:282] 0 containers: []
	W0210 13:23:20.547379  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:23:20.547389  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:23:20.547463  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:23:20.588466  688914 cri.go:89] found id: ""
	I0210 13:23:20.588516  688914 logs.go:282] 0 containers: []
	W0210 13:23:20.588528  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:23:20.588536  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:23:20.588612  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:23:20.629440  688914 cri.go:89] found id: ""
	I0210 13:23:20.629472  688914 logs.go:282] 0 containers: []
	W0210 13:23:20.629483  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:23:20.629497  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:23:20.629514  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:23:20.679841  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:23:20.679881  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:23:20.742648  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:23:20.742695  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:23:20.760108  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:23:20.760163  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:23:20.847340  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:23:20.847366  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:23:20.847383  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:23:23.444456  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:23:23.459497  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:23:23.459595  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:23:23.500432  688914 cri.go:89] found id: ""
	I0210 13:23:23.500471  688914 logs.go:282] 0 containers: []
	W0210 13:23:23.500482  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:23:23.500491  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:23:23.500560  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:23:23.535047  688914 cri.go:89] found id: ""
	I0210 13:23:23.535074  688914 logs.go:282] 0 containers: []
	W0210 13:23:23.535084  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:23:23.535093  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:23:23.535186  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:23:23.572534  688914 cri.go:89] found id: ""
	I0210 13:23:23.572558  688914 logs.go:282] 0 containers: []
	W0210 13:23:23.572565  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:23:23.572571  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:23:23.572623  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:23:23.607686  688914 cri.go:89] found id: ""
	I0210 13:23:23.607718  688914 logs.go:282] 0 containers: []
	W0210 13:23:23.607730  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:23:23.607737  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:23:23.607793  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:23:23.641399  688914 cri.go:89] found id: ""
	I0210 13:23:23.641434  688914 logs.go:282] 0 containers: []
	W0210 13:23:23.641446  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:23:23.641454  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:23:23.641519  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:23:23.673845  688914 cri.go:89] found id: ""
	I0210 13:23:23.673879  688914 logs.go:282] 0 containers: []
	W0210 13:23:23.673889  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:23:23.673898  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:23:23.673969  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:23:23.710727  688914 cri.go:89] found id: ""
	I0210 13:23:23.710776  688914 logs.go:282] 0 containers: []
	W0210 13:23:23.710799  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:23:23.710808  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:23:23.710887  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:23:23.749151  688914 cri.go:89] found id: ""
	I0210 13:23:23.749181  688914 logs.go:282] 0 containers: []
	W0210 13:23:23.749191  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:23:23.749203  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:23:23.749220  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:23:23.838640  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:23:23.838678  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:23:23.882544  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:23:23.882571  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:23:23.948425  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:23:23.948458  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:23:23.963503  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:23:23.963531  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:23:24.043630  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:23:26.544154  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:23:26.557856  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:23:26.557927  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:23:26.592114  688914 cri.go:89] found id: ""
	I0210 13:23:26.592159  688914 logs.go:282] 0 containers: []
	W0210 13:23:26.592173  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:23:26.592181  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:23:26.592247  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:23:26.629999  688914 cri.go:89] found id: ""
	I0210 13:23:26.630026  688914 logs.go:282] 0 containers: []
	W0210 13:23:26.630037  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:23:26.630045  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:23:26.630137  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:23:26.662978  688914 cri.go:89] found id: ""
	I0210 13:23:26.663011  688914 logs.go:282] 0 containers: []
	W0210 13:23:26.663023  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:23:26.663032  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:23:26.663101  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:23:26.699162  688914 cri.go:89] found id: ""
	I0210 13:23:26.699191  688914 logs.go:282] 0 containers: []
	W0210 13:23:26.699202  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:23:26.699210  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:23:26.699282  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:23:26.739308  688914 cri.go:89] found id: ""
	I0210 13:23:26.739338  688914 logs.go:282] 0 containers: []
	W0210 13:23:26.739349  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:23:26.739356  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:23:26.739425  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:23:26.782767  688914 cri.go:89] found id: ""
	I0210 13:23:26.782804  688914 logs.go:282] 0 containers: []
	W0210 13:23:26.782813  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:23:26.782821  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:23:26.782891  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:23:26.825090  688914 cri.go:89] found id: ""
	I0210 13:23:26.825156  688914 logs.go:282] 0 containers: []
	W0210 13:23:26.825169  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:23:26.825178  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:23:26.825252  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:23:26.865208  688914 cri.go:89] found id: ""
	I0210 13:23:26.865248  688914 logs.go:282] 0 containers: []
	W0210 13:23:26.865259  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:23:26.865271  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:23:26.865288  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:23:26.939253  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:23:26.939288  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:23:26.939311  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:23:27.046661  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:23:27.046707  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:23:27.089541  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:23:27.089590  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:23:27.169884  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:23:27.169943  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:23:29.685270  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:23:29.698567  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:23:29.698646  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:23:29.743387  688914 cri.go:89] found id: ""
	I0210 13:23:29.743421  688914 logs.go:282] 0 containers: []
	W0210 13:23:29.743432  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:23:29.743441  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:23:29.743529  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:23:29.786394  688914 cri.go:89] found id: ""
	I0210 13:23:29.786425  688914 logs.go:282] 0 containers: []
	W0210 13:23:29.786437  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:23:29.786445  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:23:29.786508  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:23:29.821512  688914 cri.go:89] found id: ""
	I0210 13:23:29.821546  688914 logs.go:282] 0 containers: []
	W0210 13:23:29.821558  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:23:29.821566  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:23:29.821637  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:23:29.861381  688914 cri.go:89] found id: ""
	I0210 13:23:29.861413  688914 logs.go:282] 0 containers: []
	W0210 13:23:29.861423  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:23:29.861431  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:23:29.861497  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:23:29.903006  688914 cri.go:89] found id: ""
	I0210 13:23:29.903044  688914 logs.go:282] 0 containers: []
	W0210 13:23:29.903057  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:23:29.903064  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:23:29.903149  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:23:29.937228  688914 cri.go:89] found id: ""
	I0210 13:23:29.937277  688914 logs.go:282] 0 containers: []
	W0210 13:23:29.937288  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:23:29.937296  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:23:29.937362  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:23:29.972497  688914 cri.go:89] found id: ""
	I0210 13:23:29.972537  688914 logs.go:282] 0 containers: []
	W0210 13:23:29.972550  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:23:29.972560  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:23:29.972636  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:23:30.009098  688914 cri.go:89] found id: ""
	I0210 13:23:30.009152  688914 logs.go:282] 0 containers: []
	W0210 13:23:30.009171  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:23:30.009183  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:23:30.009200  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:23:30.061448  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:23:30.061496  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:23:30.075263  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:23:30.075302  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:23:30.148230  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:23:30.148253  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:23:30.148266  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:23:30.243977  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:23:30.244024  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:23:32.827800  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:23:32.842004  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:23:32.842069  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:23:32.877032  688914 cri.go:89] found id: ""
	I0210 13:23:32.877058  688914 logs.go:282] 0 containers: []
	W0210 13:23:32.877066  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:23:32.877072  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:23:32.877152  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:23:32.911437  688914 cri.go:89] found id: ""
	I0210 13:23:32.911465  688914 logs.go:282] 0 containers: []
	W0210 13:23:32.911473  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:23:32.911480  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:23:32.911538  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:23:32.954385  688914 cri.go:89] found id: ""
	I0210 13:23:32.954419  688914 logs.go:282] 0 containers: []
	W0210 13:23:32.954430  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:23:32.954439  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:23:32.954506  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:23:32.996146  688914 cri.go:89] found id: ""
	I0210 13:23:32.996190  688914 logs.go:282] 0 containers: []
	W0210 13:23:32.996202  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:23:32.996212  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:23:32.996285  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:23:33.031258  688914 cri.go:89] found id: ""
	I0210 13:23:33.031291  688914 logs.go:282] 0 containers: []
	W0210 13:23:33.031300  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:23:33.031306  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:23:33.031366  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:23:33.092603  688914 cri.go:89] found id: ""
	I0210 13:23:33.092631  688914 logs.go:282] 0 containers: []
	W0210 13:23:33.092643  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:23:33.092654  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:23:33.092738  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:23:33.141402  688914 cri.go:89] found id: ""
	I0210 13:23:33.141430  688914 logs.go:282] 0 containers: []
	W0210 13:23:33.141441  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:23:33.141449  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:23:33.141507  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:23:33.183176  688914 cri.go:89] found id: ""
	I0210 13:23:33.183284  688914 logs.go:282] 0 containers: []
	W0210 13:23:33.183317  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:23:33.183332  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:23:33.183346  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:23:33.251560  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:23:33.251593  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:23:33.251610  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:23:33.328835  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:23:33.328865  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:23:33.369506  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:23:33.369536  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:23:33.426510  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:23:33.426544  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:23:35.943394  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:23:35.959715  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:23:35.959783  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:23:36.028691  688914 cri.go:89] found id: ""
	I0210 13:23:36.028718  688914 logs.go:282] 0 containers: []
	W0210 13:23:36.028726  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:23:36.028732  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:23:36.028788  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:23:36.061043  688914 cri.go:89] found id: ""
	I0210 13:23:36.061076  688914 logs.go:282] 0 containers: []
	W0210 13:23:36.061088  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:23:36.061095  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:23:36.061180  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:23:36.093459  688914 cri.go:89] found id: ""
	I0210 13:23:36.093498  688914 logs.go:282] 0 containers: []
	W0210 13:23:36.093515  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:23:36.093521  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:23:36.093586  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:23:36.125033  688914 cri.go:89] found id: ""
	I0210 13:23:36.125065  688914 logs.go:282] 0 containers: []
	W0210 13:23:36.125073  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:23:36.125079  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:23:36.125173  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:23:36.156969  688914 cri.go:89] found id: ""
	I0210 13:23:36.157003  688914 logs.go:282] 0 containers: []
	W0210 13:23:36.157012  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:23:36.157019  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:23:36.157072  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:23:36.191107  688914 cri.go:89] found id: ""
	I0210 13:23:36.191150  688914 logs.go:282] 0 containers: []
	W0210 13:23:36.191163  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:23:36.191172  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:23:36.191248  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:23:36.229554  688914 cri.go:89] found id: ""
	I0210 13:23:36.229587  688914 logs.go:282] 0 containers: []
	W0210 13:23:36.229599  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:23:36.229608  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:23:36.229671  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:23:36.267880  688914 cri.go:89] found id: ""
	I0210 13:23:36.267912  688914 logs.go:282] 0 containers: []
	W0210 13:23:36.267924  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:23:36.267938  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:23:36.267958  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:23:36.346350  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:23:36.346377  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:23:36.346390  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:23:36.434598  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:23:36.434653  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:23:36.478137  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:23:36.478176  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:23:36.533206  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:23:36.533239  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:23:39.048052  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:23:39.065510  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:23:39.065575  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:23:39.109759  688914 cri.go:89] found id: ""
	I0210 13:23:39.109792  688914 logs.go:282] 0 containers: []
	W0210 13:23:39.109805  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:23:39.109813  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:23:39.109883  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:23:39.146251  688914 cri.go:89] found id: ""
	I0210 13:23:39.146282  688914 logs.go:282] 0 containers: []
	W0210 13:23:39.146294  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:23:39.146301  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:23:39.146367  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:23:39.181482  688914 cri.go:89] found id: ""
	I0210 13:23:39.181521  688914 logs.go:282] 0 containers: []
	W0210 13:23:39.181533  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:23:39.181549  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:23:39.181623  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:23:39.215358  688914 cri.go:89] found id: ""
	I0210 13:23:39.215396  688914 logs.go:282] 0 containers: []
	W0210 13:23:39.215408  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:23:39.215417  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:23:39.215497  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:23:39.257137  688914 cri.go:89] found id: ""
	I0210 13:23:39.257173  688914 logs.go:282] 0 containers: []
	W0210 13:23:39.257182  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:23:39.257188  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:23:39.257262  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:23:39.292099  688914 cri.go:89] found id: ""
	I0210 13:23:39.292144  688914 logs.go:282] 0 containers: []
	W0210 13:23:39.292156  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:23:39.292165  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:23:39.292230  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:23:39.331695  688914 cri.go:89] found id: ""
	I0210 13:23:39.331725  688914 logs.go:282] 0 containers: []
	W0210 13:23:39.331736  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:23:39.331744  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:23:39.331809  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:23:39.370546  688914 cri.go:89] found id: ""
	I0210 13:23:39.370576  688914 logs.go:282] 0 containers: []
	W0210 13:23:39.370586  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:23:39.370598  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:23:39.370614  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:23:39.432312  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:23:39.432360  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:23:39.446187  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:23:39.446218  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:23:39.520243  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:23:39.520274  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:23:39.520288  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:23:39.595836  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:23:39.595873  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:23:42.133548  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:23:42.145582  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:23:42.145670  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:23:42.178195  688914 cri.go:89] found id: ""
	I0210 13:23:42.178222  688914 logs.go:282] 0 containers: []
	W0210 13:23:42.178230  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:23:42.178236  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:23:42.178289  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:23:42.210721  688914 cri.go:89] found id: ""
	I0210 13:23:42.210757  688914 logs.go:282] 0 containers: []
	W0210 13:23:42.210768  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:23:42.210777  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:23:42.210871  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:23:42.245179  688914 cri.go:89] found id: ""
	I0210 13:23:42.245216  688914 logs.go:282] 0 containers: []
	W0210 13:23:42.245225  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:23:42.245231  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:23:42.245297  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:23:42.277802  688914 cri.go:89] found id: ""
	I0210 13:23:42.277841  688914 logs.go:282] 0 containers: []
	W0210 13:23:42.277853  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:23:42.277861  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:23:42.277928  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:23:42.311774  688914 cri.go:89] found id: ""
	I0210 13:23:42.311807  688914 logs.go:282] 0 containers: []
	W0210 13:23:42.311819  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:23:42.311828  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:23:42.311890  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:23:42.344435  688914 cri.go:89] found id: ""
	I0210 13:23:42.344469  688914 logs.go:282] 0 containers: []
	W0210 13:23:42.344482  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:23:42.344491  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:23:42.344560  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:23:42.377537  688914 cri.go:89] found id: ""
	I0210 13:23:42.377568  688914 logs.go:282] 0 containers: []
	W0210 13:23:42.377577  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:23:42.377583  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:23:42.377635  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:23:42.410933  688914 cri.go:89] found id: ""
	I0210 13:23:42.410965  688914 logs.go:282] 0 containers: []
	W0210 13:23:42.410973  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:23:42.410984  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:23:42.410997  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:23:42.481280  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:23:42.481310  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:23:42.481328  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:23:42.577208  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:23:42.577253  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:23:42.617863  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:23:42.617892  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:23:42.682377  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:23:42.682419  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:23:45.201996  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:23:45.219941  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:23:45.220014  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:23:45.281036  688914 cri.go:89] found id: ""
	I0210 13:23:45.281063  688914 logs.go:282] 0 containers: []
	W0210 13:23:45.281073  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:23:45.281082  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:23:45.281167  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:23:45.337361  688914 cri.go:89] found id: ""
	I0210 13:23:45.337394  688914 logs.go:282] 0 containers: []
	W0210 13:23:45.337406  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:23:45.337415  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:23:45.337490  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:23:45.390570  688914 cri.go:89] found id: ""
	I0210 13:23:45.390605  688914 logs.go:282] 0 containers: []
	W0210 13:23:45.390615  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:23:45.390626  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:23:45.390705  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:23:45.429877  688914 cri.go:89] found id: ""
	I0210 13:23:45.429912  688914 logs.go:282] 0 containers: []
	W0210 13:23:45.429923  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:23:45.429931  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:23:45.430003  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:23:45.465603  688914 cri.go:89] found id: ""
	I0210 13:23:45.465631  688914 logs.go:282] 0 containers: []
	W0210 13:23:45.465638  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:23:45.465648  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:23:45.465706  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:23:45.509457  688914 cri.go:89] found id: ""
	I0210 13:23:45.509490  688914 logs.go:282] 0 containers: []
	W0210 13:23:45.509502  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:23:45.509515  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:23:45.509581  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:23:45.549762  688914 cri.go:89] found id: ""
	I0210 13:23:45.549796  688914 logs.go:282] 0 containers: []
	W0210 13:23:45.549806  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:23:45.549813  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:23:45.549873  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:23:45.602205  688914 cri.go:89] found id: ""
	I0210 13:23:45.602233  688914 logs.go:282] 0 containers: []
	W0210 13:23:45.602244  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:23:45.602255  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:23:45.602270  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:23:45.666395  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:23:45.666434  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:23:45.685197  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:23:45.685230  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:23:45.762180  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:23:45.762210  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:23:45.762230  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:23:45.849737  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:23:45.849770  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:23:48.401581  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:23:48.420242  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:23:48.420321  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:23:48.464494  688914 cri.go:89] found id: ""
	I0210 13:23:48.464529  688914 logs.go:282] 0 containers: []
	W0210 13:23:48.464540  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:23:48.464548  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:23:48.464609  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:23:48.501816  688914 cri.go:89] found id: ""
	I0210 13:23:48.501856  688914 logs.go:282] 0 containers: []
	W0210 13:23:48.501866  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:23:48.501874  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:23:48.501932  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:23:48.546670  688914 cri.go:89] found id: ""
	I0210 13:23:48.546702  688914 logs.go:282] 0 containers: []
	W0210 13:23:48.546712  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:23:48.546720  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:23:48.546787  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:23:48.591465  688914 cri.go:89] found id: ""
	I0210 13:23:48.591501  688914 logs.go:282] 0 containers: []
	W0210 13:23:48.591513  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:23:48.591523  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:23:48.591587  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:23:48.635138  688914 cri.go:89] found id: ""
	I0210 13:23:48.635176  688914 logs.go:282] 0 containers: []
	W0210 13:23:48.635194  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:23:48.635203  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:23:48.635271  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:23:48.678213  688914 cri.go:89] found id: ""
	I0210 13:23:48.678247  688914 logs.go:282] 0 containers: []
	W0210 13:23:48.678259  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:23:48.678267  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:23:48.678337  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:23:48.715068  688914 cri.go:89] found id: ""
	I0210 13:23:48.715099  688914 logs.go:282] 0 containers: []
	W0210 13:23:48.715110  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:23:48.715119  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:23:48.715198  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:23:48.752094  688914 cri.go:89] found id: ""
	I0210 13:23:48.752125  688914 logs.go:282] 0 containers: []
	W0210 13:23:48.752135  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:23:48.752149  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:23:48.752176  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:23:48.832825  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:23:48.832866  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:23:48.884137  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:23:48.884178  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:23:48.946874  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:23:48.946920  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:23:48.962782  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:23:48.962831  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:23:49.034553  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:23:51.537259  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:23:51.552819  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:23:51.552901  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:23:51.590667  688914 cri.go:89] found id: ""
	I0210 13:23:51.590699  688914 logs.go:282] 0 containers: []
	W0210 13:23:51.590711  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:23:51.590719  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:23:51.590783  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:23:51.636562  688914 cri.go:89] found id: ""
	I0210 13:23:51.636592  688914 logs.go:282] 0 containers: []
	W0210 13:23:51.636602  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:23:51.636610  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:23:51.636679  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:23:51.680627  688914 cri.go:89] found id: ""
	I0210 13:23:51.680655  688914 logs.go:282] 0 containers: []
	W0210 13:23:51.680666  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:23:51.680674  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:23:51.680745  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:23:51.721614  688914 cri.go:89] found id: ""
	I0210 13:23:51.721641  688914 logs.go:282] 0 containers: []
	W0210 13:23:51.721650  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:23:51.721656  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:23:51.721710  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:23:51.759367  688914 cri.go:89] found id: ""
	I0210 13:23:51.759402  688914 logs.go:282] 0 containers: []
	W0210 13:23:51.759415  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:23:51.759422  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:23:51.759502  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:23:51.795820  688914 cri.go:89] found id: ""
	I0210 13:23:51.795854  688914 logs.go:282] 0 containers: []
	W0210 13:23:51.795867  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:23:51.795875  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:23:51.795932  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:23:51.831976  688914 cri.go:89] found id: ""
	I0210 13:23:51.832015  688914 logs.go:282] 0 containers: []
	W0210 13:23:51.832028  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:23:51.832036  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:23:51.832108  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:23:51.868599  688914 cri.go:89] found id: ""
	I0210 13:23:51.868639  688914 logs.go:282] 0 containers: []
	W0210 13:23:51.868652  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:23:51.868665  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:23:51.868681  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:23:51.950475  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:23:51.950501  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:23:51.950519  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:23:52.036052  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:23:52.036106  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:23:52.090780  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:23:52.090821  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:23:52.157903  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:23:52.157946  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:23:54.678361  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:23:54.692310  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:23:54.692387  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:23:54.726581  688914 cri.go:89] found id: ""
	I0210 13:23:54.726619  688914 logs.go:282] 0 containers: []
	W0210 13:23:54.726630  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:23:54.726639  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:23:54.726711  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:23:54.768260  688914 cri.go:89] found id: ""
	I0210 13:23:54.768291  688914 logs.go:282] 0 containers: []
	W0210 13:23:54.768302  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:23:54.768310  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:23:54.768375  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:23:54.802533  688914 cri.go:89] found id: ""
	I0210 13:23:54.802562  688914 logs.go:282] 0 containers: []
	W0210 13:23:54.802570  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:23:54.802577  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:23:54.802628  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:23:54.838769  688914 cri.go:89] found id: ""
	I0210 13:23:54.838808  688914 logs.go:282] 0 containers: []
	W0210 13:23:54.838820  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:23:54.838829  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:23:54.838899  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:23:54.872084  688914 cri.go:89] found id: ""
	I0210 13:23:54.872130  688914 logs.go:282] 0 containers: []
	W0210 13:23:54.872140  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:23:54.872147  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:23:54.872213  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:23:54.904788  688914 cri.go:89] found id: ""
	I0210 13:23:54.904825  688914 logs.go:282] 0 containers: []
	W0210 13:23:54.904837  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:23:54.904845  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:23:54.904901  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:23:54.938477  688914 cri.go:89] found id: ""
	I0210 13:23:54.938505  688914 logs.go:282] 0 containers: []
	W0210 13:23:54.938515  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:23:54.938522  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:23:54.938573  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:23:54.975215  688914 cri.go:89] found id: ""
	I0210 13:23:54.975244  688914 logs.go:282] 0 containers: []
	W0210 13:23:54.975252  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:23:54.975267  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:23:54.975289  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:23:55.029092  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:23:55.029153  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:23:55.044880  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:23:55.044910  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:23:55.115691  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:23:55.115721  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:23:55.115739  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:23:55.194785  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:23:55.194828  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:23:57.742098  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:23:57.760607  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:23:57.760695  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:23:57.804783  688914 cri.go:89] found id: ""
	I0210 13:23:57.804818  688914 logs.go:282] 0 containers: []
	W0210 13:23:57.804829  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:23:57.804839  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:23:57.804904  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:23:57.839292  688914 cri.go:89] found id: ""
	I0210 13:23:57.839329  688914 logs.go:282] 0 containers: []
	W0210 13:23:57.839340  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:23:57.839348  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:23:57.839422  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:23:57.871610  688914 cri.go:89] found id: ""
	I0210 13:23:57.871645  688914 logs.go:282] 0 containers: []
	W0210 13:23:57.871657  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:23:57.871666  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:23:57.871725  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:23:57.901081  688914 cri.go:89] found id: ""
	I0210 13:23:57.901125  688914 logs.go:282] 0 containers: []
	W0210 13:23:57.901143  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:23:57.901151  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:23:57.901230  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:23:57.934387  688914 cri.go:89] found id: ""
	I0210 13:23:57.934420  688914 logs.go:282] 0 containers: []
	W0210 13:23:57.934431  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:23:57.934439  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:23:57.934516  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:23:57.967194  688914 cri.go:89] found id: ""
	I0210 13:23:57.967230  688914 logs.go:282] 0 containers: []
	W0210 13:23:57.967243  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:23:57.967252  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:23:57.967328  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:23:58.000300  688914 cri.go:89] found id: ""
	I0210 13:23:58.000333  688914 logs.go:282] 0 containers: []
	W0210 13:23:58.000344  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:23:58.000354  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:23:58.000419  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:23:58.037416  688914 cri.go:89] found id: ""
	I0210 13:23:58.037446  688914 logs.go:282] 0 containers: []
	W0210 13:23:58.037464  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:23:58.037476  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:23:58.037492  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:23:58.126079  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:23:58.126120  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:23:58.167618  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:23:58.167647  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:23:58.224559  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:23:58.224598  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:23:58.242762  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:23:58.242806  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:23:58.319029  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:24:00.820035  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:24:00.833048  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:24:00.833175  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:24:00.866615  688914 cri.go:89] found id: ""
	I0210 13:24:00.866650  688914 logs.go:282] 0 containers: []
	W0210 13:24:00.866662  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:24:00.866671  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:24:00.866741  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:24:00.909694  688914 cri.go:89] found id: ""
	I0210 13:24:00.909731  688914 logs.go:282] 0 containers: []
	W0210 13:24:00.909744  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:24:00.909753  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:24:00.909832  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:24:00.945661  688914 cri.go:89] found id: ""
	I0210 13:24:00.945695  688914 logs.go:282] 0 containers: []
	W0210 13:24:00.945704  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:24:00.945713  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:24:00.945781  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:24:00.980788  688914 cri.go:89] found id: ""
	I0210 13:24:00.980817  688914 logs.go:282] 0 containers: []
	W0210 13:24:00.980833  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:24:00.980841  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:24:00.980914  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:24:01.019334  688914 cri.go:89] found id: ""
	I0210 13:24:01.019382  688914 logs.go:282] 0 containers: []
	W0210 13:24:01.019402  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:24:01.019413  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:24:01.019487  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:24:01.067068  688914 cri.go:89] found id: ""
	I0210 13:24:01.067151  688914 logs.go:282] 0 containers: []
	W0210 13:24:01.067169  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:24:01.067180  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:24:01.067248  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:24:01.105099  688914 cri.go:89] found id: ""
	I0210 13:24:01.105154  688914 logs.go:282] 0 containers: []
	W0210 13:24:01.105166  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:24:01.105175  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:24:01.105244  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:24:01.141994  688914 cri.go:89] found id: ""
	I0210 13:24:01.142030  688914 logs.go:282] 0 containers: []
	W0210 13:24:01.142041  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:24:01.142055  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:24:01.142071  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:24:01.193437  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:24:01.193474  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:24:01.207118  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:24:01.207151  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:24:01.271699  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:24:01.271724  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:24:01.271741  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:24:01.343250  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:24:01.343294  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:24:03.882282  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:24:03.896336  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:24:03.896422  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:24:03.932509  688914 cri.go:89] found id: ""
	I0210 13:24:03.932541  688914 logs.go:282] 0 containers: []
	W0210 13:24:03.932552  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:24:03.932561  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:24:03.932629  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:24:03.966330  688914 cri.go:89] found id: ""
	I0210 13:24:03.966362  688914 logs.go:282] 0 containers: []
	W0210 13:24:03.966374  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:24:03.966381  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:24:03.966461  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:24:04.005938  688914 cri.go:89] found id: ""
	I0210 13:24:04.005968  688914 logs.go:282] 0 containers: []
	W0210 13:24:04.005979  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:24:04.005987  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:24:04.006062  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:24:04.043742  688914 cri.go:89] found id: ""
	I0210 13:24:04.043775  688914 logs.go:282] 0 containers: []
	W0210 13:24:04.043785  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:24:04.043791  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:24:04.043842  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:24:04.078794  688914 cri.go:89] found id: ""
	I0210 13:24:04.078824  688914 logs.go:282] 0 containers: []
	W0210 13:24:04.078834  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:24:04.078841  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:24:04.078906  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:24:04.115609  688914 cri.go:89] found id: ""
	I0210 13:24:04.115634  688914 logs.go:282] 0 containers: []
	W0210 13:24:04.115644  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:24:04.115652  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:24:04.115705  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:24:04.151555  688914 cri.go:89] found id: ""
	I0210 13:24:04.151581  688914 logs.go:282] 0 containers: []
	W0210 13:24:04.151591  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:24:04.151599  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:24:04.151662  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:24:04.189305  688914 cri.go:89] found id: ""
	I0210 13:24:04.189332  688914 logs.go:282] 0 containers: []
	W0210 13:24:04.189343  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:24:04.189356  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:24:04.189370  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:24:04.240414  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:24:04.240458  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:24:04.253321  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:24:04.253368  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:24:04.346626  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:24:04.346646  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:24:04.346661  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:24:04.433250  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:24:04.433283  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:24:06.977058  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:24:06.990358  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:24:06.990429  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:24:07.024264  688914 cri.go:89] found id: ""
	I0210 13:24:07.024291  688914 logs.go:282] 0 containers: []
	W0210 13:24:07.024299  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:24:07.024307  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:24:07.024369  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:24:07.058942  688914 cri.go:89] found id: ""
	I0210 13:24:07.058968  688914 logs.go:282] 0 containers: []
	W0210 13:24:07.058976  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:24:07.058982  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:24:07.059050  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:24:07.091593  688914 cri.go:89] found id: ""
	I0210 13:24:07.091621  688914 logs.go:282] 0 containers: []
	W0210 13:24:07.091629  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:24:07.091636  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:24:07.091696  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:24:07.122238  688914 cri.go:89] found id: ""
	I0210 13:24:07.122268  688914 logs.go:282] 0 containers: []
	W0210 13:24:07.122277  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:24:07.122284  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:24:07.122336  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:24:07.157046  688914 cri.go:89] found id: ""
	I0210 13:24:07.157077  688914 logs.go:282] 0 containers: []
	W0210 13:24:07.157088  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:24:07.157096  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:24:07.157180  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:24:07.189699  688914 cri.go:89] found id: ""
	I0210 13:24:07.189729  688914 logs.go:282] 0 containers: []
	W0210 13:24:07.189737  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:24:07.189743  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:24:07.189810  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:24:07.228778  688914 cri.go:89] found id: ""
	I0210 13:24:07.228808  688914 logs.go:282] 0 containers: []
	W0210 13:24:07.228816  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:24:07.228822  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:24:07.228889  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:24:07.262734  688914 cri.go:89] found id: ""
	I0210 13:24:07.262773  688914 logs.go:282] 0 containers: []
	W0210 13:24:07.262786  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:24:07.262799  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:24:07.262815  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:24:07.275851  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:24:07.275875  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:24:07.340215  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:24:07.340235  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:24:07.340247  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:24:07.413979  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:24:07.414014  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:24:07.452945  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:24:07.452978  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:24:10.003331  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:24:10.019580  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:24:10.019663  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:24:10.054175  688914 cri.go:89] found id: ""
	I0210 13:24:10.054210  688914 logs.go:282] 0 containers: []
	W0210 13:24:10.054222  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:24:10.054237  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:24:10.054303  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:24:10.089858  688914 cri.go:89] found id: ""
	I0210 13:24:10.089892  688914 logs.go:282] 0 containers: []
	W0210 13:24:10.089901  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:24:10.089908  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:24:10.089984  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:24:10.126303  688914 cri.go:89] found id: ""
	I0210 13:24:10.126351  688914 logs.go:282] 0 containers: []
	W0210 13:24:10.126371  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:24:10.126380  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:24:10.126471  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:24:10.163486  688914 cri.go:89] found id: ""
	I0210 13:24:10.163517  688914 logs.go:282] 0 containers: []
	W0210 13:24:10.163528  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:24:10.163536  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:24:10.163600  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:24:10.196984  688914 cri.go:89] found id: ""
	I0210 13:24:10.197016  688914 logs.go:282] 0 containers: []
	W0210 13:24:10.197025  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:24:10.197031  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:24:10.197094  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:24:10.229896  688914 cri.go:89] found id: ""
	I0210 13:24:10.229931  688914 logs.go:282] 0 containers: []
	W0210 13:24:10.229942  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:24:10.229950  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:24:10.230018  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:24:10.268959  688914 cri.go:89] found id: ""
	I0210 13:24:10.269001  688914 logs.go:282] 0 containers: []
	W0210 13:24:10.269013  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:24:10.269021  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:24:10.269077  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:24:10.323112  688914 cri.go:89] found id: ""
	I0210 13:24:10.323148  688914 logs.go:282] 0 containers: []
	W0210 13:24:10.323160  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:24:10.323173  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:24:10.323191  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:24:10.374059  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:24:10.374093  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:24:10.386231  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:24:10.386262  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:24:10.454702  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:24:10.454730  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:24:10.454746  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:24:10.532633  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:24:10.532666  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:24:13.069275  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:24:13.084593  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:24:13.084690  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:24:13.123344  688914 cri.go:89] found id: ""
	I0210 13:24:13.123380  688914 logs.go:282] 0 containers: []
	W0210 13:24:13.123391  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:24:13.123398  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:24:13.123454  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:24:13.170659  688914 cri.go:89] found id: ""
	I0210 13:24:13.170695  688914 logs.go:282] 0 containers: []
	W0210 13:24:13.170708  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:24:13.170716  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:24:13.170784  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:24:13.211450  688914 cri.go:89] found id: ""
	I0210 13:24:13.211482  688914 logs.go:282] 0 containers: []
	W0210 13:24:13.211493  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:24:13.211501  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:24:13.211570  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:24:13.244358  688914 cri.go:89] found id: ""
	I0210 13:24:13.244403  688914 logs.go:282] 0 containers: []
	W0210 13:24:13.244412  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:24:13.244419  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:24:13.244473  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:24:13.282077  688914 cri.go:89] found id: ""
	I0210 13:24:13.282116  688914 logs.go:282] 0 containers: []
	W0210 13:24:13.282135  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:24:13.282145  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:24:13.282224  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:24:13.330165  688914 cri.go:89] found id: ""
	I0210 13:24:13.330205  688914 logs.go:282] 0 containers: []
	W0210 13:24:13.330218  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:24:13.330229  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:24:13.330306  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:24:13.373815  688914 cri.go:89] found id: ""
	I0210 13:24:13.373854  688914 logs.go:282] 0 containers: []
	W0210 13:24:13.373875  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:24:13.373884  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:24:13.373955  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:24:13.414818  688914 cri.go:89] found id: ""
	I0210 13:24:13.414854  688914 logs.go:282] 0 containers: []
	W0210 13:24:13.414866  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:24:13.414881  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:24:13.414898  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:24:13.480327  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:24:13.480380  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:24:13.493798  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:24:13.493835  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:24:13.573021  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:24:13.573051  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:24:13.573068  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:24:13.651073  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:24:13.651114  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:24:16.190045  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:24:16.203302  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:24:16.203379  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:24:16.236678  688914 cri.go:89] found id: ""
	I0210 13:24:16.236711  688914 logs.go:282] 0 containers: []
	W0210 13:24:16.236724  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:24:16.236732  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:24:16.236800  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:24:16.279182  688914 cri.go:89] found id: ""
	I0210 13:24:16.279210  688914 logs.go:282] 0 containers: []
	W0210 13:24:16.279221  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:24:16.279228  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:24:16.279287  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:24:16.322409  688914 cri.go:89] found id: ""
	I0210 13:24:16.322439  688914 logs.go:282] 0 containers: []
	W0210 13:24:16.322449  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:24:16.322455  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:24:16.322507  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:24:16.362732  688914 cri.go:89] found id: ""
	I0210 13:24:16.362758  688914 logs.go:282] 0 containers: []
	W0210 13:24:16.362767  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:24:16.362774  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:24:16.362839  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:24:16.398870  688914 cri.go:89] found id: ""
	I0210 13:24:16.398893  688914 logs.go:282] 0 containers: []
	W0210 13:24:16.398900  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:24:16.398907  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:24:16.398961  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:24:16.436645  688914 cri.go:89] found id: ""
	I0210 13:24:16.436678  688914 logs.go:282] 0 containers: []
	W0210 13:24:16.436689  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:24:16.436697  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:24:16.436751  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:24:16.474895  688914 cri.go:89] found id: ""
	I0210 13:24:16.474921  688914 logs.go:282] 0 containers: []
	W0210 13:24:16.474930  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:24:16.474936  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:24:16.474979  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:24:16.507902  688914 cri.go:89] found id: ""
	I0210 13:24:16.507927  688914 logs.go:282] 0 containers: []
	W0210 13:24:16.507935  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:24:16.507944  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:24:16.507956  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:24:16.551404  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:24:16.551442  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:24:16.605861  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:24:16.605902  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:24:16.622457  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:24:16.622498  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:24:16.700551  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:24:16.700580  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:24:16.700600  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:24:19.294534  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:24:19.308767  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:24:19.308839  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:24:19.350578  688914 cri.go:89] found id: ""
	I0210 13:24:19.350613  688914 logs.go:282] 0 containers: []
	W0210 13:24:19.350624  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:24:19.350634  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:24:19.350709  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:24:19.386741  688914 cri.go:89] found id: ""
	I0210 13:24:19.386777  688914 logs.go:282] 0 containers: []
	W0210 13:24:19.386789  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:24:19.386797  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:24:19.386869  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:24:19.424645  688914 cri.go:89] found id: ""
	I0210 13:24:19.424674  688914 logs.go:282] 0 containers: []
	W0210 13:24:19.424686  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:24:19.424694  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:24:19.424772  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:24:19.462598  688914 cri.go:89] found id: ""
	I0210 13:24:19.462630  688914 logs.go:282] 0 containers: []
	W0210 13:24:19.462638  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:24:19.462644  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:24:19.462707  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:24:19.500411  688914 cri.go:89] found id: ""
	I0210 13:24:19.500443  688914 logs.go:282] 0 containers: []
	W0210 13:24:19.500454  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:24:19.500462  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:24:19.500537  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:24:19.537205  688914 cri.go:89] found id: ""
	I0210 13:24:19.537240  688914 logs.go:282] 0 containers: []
	W0210 13:24:19.537251  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:24:19.537259  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:24:19.537326  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:24:19.577510  688914 cri.go:89] found id: ""
	I0210 13:24:19.577545  688914 logs.go:282] 0 containers: []
	W0210 13:24:19.577555  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:24:19.577561  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:24:19.577615  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:24:19.612322  688914 cri.go:89] found id: ""
	I0210 13:24:19.612352  688914 logs.go:282] 0 containers: []
	W0210 13:24:19.612362  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:24:19.612376  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:24:19.612391  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:24:19.658436  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:24:19.658473  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:24:19.671021  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:24:19.671051  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:24:19.745218  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:24:19.745247  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:24:19.745262  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:24:19.823419  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:24:19.823460  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:24:22.358518  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:24:22.372004  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:24:22.372077  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:24:22.404990  688914 cri.go:89] found id: ""
	I0210 13:24:22.405019  688914 logs.go:282] 0 containers: []
	W0210 13:24:22.405030  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:24:22.405039  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:24:22.405144  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:24:22.441167  688914 cri.go:89] found id: ""
	I0210 13:24:22.441203  688914 logs.go:282] 0 containers: []
	W0210 13:24:22.441214  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:24:22.441223  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:24:22.441292  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:24:22.506176  688914 cri.go:89] found id: ""
	I0210 13:24:22.506210  688914 logs.go:282] 0 containers: []
	W0210 13:24:22.506220  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:24:22.506228  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:24:22.506290  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:24:22.557335  688914 cri.go:89] found id: ""
	I0210 13:24:22.557373  688914 logs.go:282] 0 containers: []
	W0210 13:24:22.557384  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:24:22.557392  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:24:22.557456  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:24:22.599130  688914 cri.go:89] found id: ""
	I0210 13:24:22.599162  688914 logs.go:282] 0 containers: []
	W0210 13:24:22.599170  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:24:22.599177  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:24:22.599247  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:24:22.631423  688914 cri.go:89] found id: ""
	I0210 13:24:22.631455  688914 logs.go:282] 0 containers: []
	W0210 13:24:22.631466  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:24:22.631474  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:24:22.631536  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:24:22.665719  688914 cri.go:89] found id: ""
	I0210 13:24:22.665752  688914 logs.go:282] 0 containers: []
	W0210 13:24:22.665763  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:24:22.665774  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:24:22.665843  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:24:22.707303  688914 cri.go:89] found id: ""
	I0210 13:24:22.707335  688914 logs.go:282] 0 containers: []
	W0210 13:24:22.707346  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:24:22.707359  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:24:22.707375  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:24:22.758655  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:24:22.758698  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:24:22.773322  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:24:22.773357  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:24:22.842220  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:24:22.842247  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:24:22.842262  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:24:22.935069  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:24:22.935111  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:24:25.483114  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:24:25.496531  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:24:25.496614  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:24:25.530326  688914 cri.go:89] found id: ""
	I0210 13:24:25.530364  688914 logs.go:282] 0 containers: []
	W0210 13:24:25.530377  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:24:25.530390  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:24:25.530457  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:24:25.565151  688914 cri.go:89] found id: ""
	I0210 13:24:25.565189  688914 logs.go:282] 0 containers: []
	W0210 13:24:25.565201  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:24:25.565209  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:24:25.565278  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:24:25.596680  688914 cri.go:89] found id: ""
	I0210 13:24:25.596708  688914 logs.go:282] 0 containers: []
	W0210 13:24:25.596716  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:24:25.596722  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:24:25.596778  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:24:25.633232  688914 cri.go:89] found id: ""
	I0210 13:24:25.633269  688914 logs.go:282] 0 containers: []
	W0210 13:24:25.633286  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:24:25.633293  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:24:25.633348  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:24:25.669890  688914 cri.go:89] found id: ""
	I0210 13:24:25.669922  688914 logs.go:282] 0 containers: []
	W0210 13:24:25.669933  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:24:25.669939  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:24:25.669994  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:24:25.701686  688914 cri.go:89] found id: ""
	I0210 13:24:25.701723  688914 logs.go:282] 0 containers: []
	W0210 13:24:25.701735  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:24:25.701743  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:24:25.701812  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:24:25.733412  688914 cri.go:89] found id: ""
	I0210 13:24:25.733447  688914 logs.go:282] 0 containers: []
	W0210 13:24:25.733459  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:24:25.733467  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:24:25.733532  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:24:25.766359  688914 cri.go:89] found id: ""
	I0210 13:24:25.766393  688914 logs.go:282] 0 containers: []
	W0210 13:24:25.766403  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:24:25.766427  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:24:25.766444  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:24:25.815274  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:24:25.815313  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:24:25.827593  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:24:25.827621  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:24:25.890686  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:24:25.890717  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:24:25.890733  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:24:25.962616  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:24:25.962657  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:24:28.509576  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:24:28.521973  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:24:28.522051  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:24:28.553554  688914 cri.go:89] found id: ""
	I0210 13:24:28.553589  688914 logs.go:282] 0 containers: []
	W0210 13:24:28.553598  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:24:28.553605  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:24:28.553659  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:24:28.584465  688914 cri.go:89] found id: ""
	I0210 13:24:28.584501  688914 logs.go:282] 0 containers: []
	W0210 13:24:28.584512  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:24:28.584520  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:24:28.584594  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:24:28.615697  688914 cri.go:89] found id: ""
	I0210 13:24:28.615736  688914 logs.go:282] 0 containers: []
	W0210 13:24:28.615752  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:24:28.615760  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:24:28.615833  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:24:28.649512  688914 cri.go:89] found id: ""
	I0210 13:24:28.649540  688914 logs.go:282] 0 containers: []
	W0210 13:24:28.649547  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:24:28.649553  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:24:28.649603  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:24:28.681569  688914 cri.go:89] found id: ""
	I0210 13:24:28.681611  688914 logs.go:282] 0 containers: []
	W0210 13:24:28.681624  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:24:28.681633  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:24:28.681706  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:24:28.718496  688914 cri.go:89] found id: ""
	I0210 13:24:28.718528  688914 logs.go:282] 0 containers: []
	W0210 13:24:28.718537  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:24:28.718543  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:24:28.718599  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:24:28.751632  688914 cri.go:89] found id: ""
	I0210 13:24:28.751670  688914 logs.go:282] 0 containers: []
	W0210 13:24:28.751682  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:24:28.751690  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:24:28.751756  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:24:28.784755  688914 cri.go:89] found id: ""
	I0210 13:24:28.784786  688914 logs.go:282] 0 containers: []
	W0210 13:24:28.784795  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:24:28.784805  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:24:28.784820  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:24:28.824686  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:24:28.824718  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:24:28.877464  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:24:28.877503  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:24:28.890733  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:24:28.890769  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:24:28.950427  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:24:28.950451  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:24:28.950468  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:24:31.527163  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:24:31.545818  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:24:31.545902  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:24:31.581413  688914 cri.go:89] found id: ""
	I0210 13:24:31.581448  688914 logs.go:282] 0 containers: []
	W0210 13:24:31.581460  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:24:31.581467  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:24:31.581528  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:24:31.615366  688914 cri.go:89] found id: ""
	I0210 13:24:31.615400  688914 logs.go:282] 0 containers: []
	W0210 13:24:31.615413  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:24:31.615427  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:24:31.615498  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:24:31.647910  688914 cri.go:89] found id: ""
	I0210 13:24:31.647944  688914 logs.go:282] 0 containers: []
	W0210 13:24:31.647955  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:24:31.647964  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:24:31.648036  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:24:31.687678  688914 cri.go:89] found id: ""
	I0210 13:24:31.687732  688914 logs.go:282] 0 containers: []
	W0210 13:24:31.687744  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:24:31.687753  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:24:31.687835  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:24:31.723614  688914 cri.go:89] found id: ""
	I0210 13:24:31.723643  688914 logs.go:282] 0 containers: []
	W0210 13:24:31.723651  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:24:31.723657  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:24:31.723712  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:24:31.761495  688914 cri.go:89] found id: ""
	I0210 13:24:31.761530  688914 logs.go:282] 0 containers: []
	W0210 13:24:31.761542  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:24:31.761550  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:24:31.761627  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:24:31.798791  688914 cri.go:89] found id: ""
	I0210 13:24:31.798826  688914 logs.go:282] 0 containers: []
	W0210 13:24:31.798837  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:24:31.798845  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:24:31.798909  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:24:31.832267  688914 cri.go:89] found id: ""
	I0210 13:24:31.832301  688914 logs.go:282] 0 containers: []
	W0210 13:24:31.832312  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:24:31.832324  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:24:31.832338  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:24:31.880327  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:24:31.880366  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:24:31.894846  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:24:31.894879  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:24:31.971865  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:24:31.971894  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:24:31.971910  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:24:32.044960  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:24:32.045003  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:24:34.591487  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:24:34.604353  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:24:34.604429  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:24:34.642995  688914 cri.go:89] found id: ""
	I0210 13:24:34.643027  688914 logs.go:282] 0 containers: []
	W0210 13:24:34.643038  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:24:34.643046  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:24:34.643116  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:24:34.676910  688914 cri.go:89] found id: ""
	I0210 13:24:34.676943  688914 logs.go:282] 0 containers: []
	W0210 13:24:34.676954  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:24:34.676960  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:24:34.677015  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:24:34.708668  688914 cri.go:89] found id: ""
	I0210 13:24:34.708700  688914 logs.go:282] 0 containers: []
	W0210 13:24:34.708707  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:24:34.708713  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:24:34.708765  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:24:34.745621  688914 cri.go:89] found id: ""
	I0210 13:24:34.745646  688914 logs.go:282] 0 containers: []
	W0210 13:24:34.745655  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:24:34.745661  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:24:34.745712  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:24:34.777510  688914 cri.go:89] found id: ""
	I0210 13:24:34.777538  688914 logs.go:282] 0 containers: []
	W0210 13:24:34.777550  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:24:34.777557  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:24:34.777627  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:24:34.808365  688914 cri.go:89] found id: ""
	I0210 13:24:34.808396  688914 logs.go:282] 0 containers: []
	W0210 13:24:34.808404  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:24:34.808413  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:24:34.808477  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:24:34.840284  688914 cri.go:89] found id: ""
	I0210 13:24:34.840315  688914 logs.go:282] 0 containers: []
	W0210 13:24:34.840326  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:24:34.840335  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:24:34.840394  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:24:34.877168  688914 cri.go:89] found id: ""
	I0210 13:24:34.877197  688914 logs.go:282] 0 containers: []
	W0210 13:24:34.877205  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:24:34.877224  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:24:34.877242  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:24:34.949894  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:24:34.949938  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:24:34.988689  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:24:34.988729  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:24:35.035403  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:24:35.035444  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:24:35.048790  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:24:35.048825  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:24:35.115128  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:24:37.617223  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:24:37.631174  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:24:37.631253  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:24:37.670822  688914 cri.go:89] found id: ""
	I0210 13:24:37.670857  688914 logs.go:282] 0 containers: []
	W0210 13:24:37.670870  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:24:37.670881  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:24:37.670945  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:24:37.706976  688914 cri.go:89] found id: ""
	I0210 13:24:37.707006  688914 logs.go:282] 0 containers: []
	W0210 13:24:37.707017  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:24:37.707025  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:24:37.707093  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:24:37.744808  688914 cri.go:89] found id: ""
	I0210 13:24:37.744839  688914 logs.go:282] 0 containers: []
	W0210 13:24:37.744848  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:24:37.744855  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:24:37.744910  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:24:37.781367  688914 cri.go:89] found id: ""
	I0210 13:24:37.781398  688914 logs.go:282] 0 containers: []
	W0210 13:24:37.781410  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:24:37.781421  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:24:37.781484  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:24:37.819116  688914 cri.go:89] found id: ""
	I0210 13:24:37.819148  688914 logs.go:282] 0 containers: []
	W0210 13:24:37.819157  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:24:37.819163  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:24:37.819232  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:24:37.852613  688914 cri.go:89] found id: ""
	I0210 13:24:37.852643  688914 logs.go:282] 0 containers: []
	W0210 13:24:37.852654  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:24:37.852663  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:24:37.852723  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:24:37.887524  688914 cri.go:89] found id: ""
	I0210 13:24:37.887553  688914 logs.go:282] 0 containers: []
	W0210 13:24:37.887562  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:24:37.887568  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:24:37.887619  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:24:37.919905  688914 cri.go:89] found id: ""
	I0210 13:24:37.919942  688914 logs.go:282] 0 containers: []
	W0210 13:24:37.919953  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:24:37.919967  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:24:37.919984  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:24:37.970552  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:24:37.970584  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:24:37.985616  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:24:37.985652  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:24:38.053251  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:24:38.053274  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:24:38.053291  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:24:38.123780  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:24:38.123818  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:24:40.661417  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:24:40.673492  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:24:40.673565  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:24:40.704651  688914 cri.go:89] found id: ""
	I0210 13:24:40.704682  688914 logs.go:282] 0 containers: []
	W0210 13:24:40.704691  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:24:40.704698  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:24:40.704757  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:24:40.738312  688914 cri.go:89] found id: ""
	I0210 13:24:40.738340  688914 logs.go:282] 0 containers: []
	W0210 13:24:40.738348  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:24:40.738355  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:24:40.738427  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:24:40.770358  688914 cri.go:89] found id: ""
	I0210 13:24:40.770392  688914 logs.go:282] 0 containers: []
	W0210 13:24:40.770404  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:24:40.770413  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:24:40.770483  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:24:40.806743  688914 cri.go:89] found id: ""
	I0210 13:24:40.806777  688914 logs.go:282] 0 containers: []
	W0210 13:24:40.806789  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:24:40.806797  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:24:40.806856  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:24:40.838580  688914 cri.go:89] found id: ""
	I0210 13:24:40.838614  688914 logs.go:282] 0 containers: []
	W0210 13:24:40.838626  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:24:40.838643  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:24:40.838715  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:24:40.869410  688914 cri.go:89] found id: ""
	I0210 13:24:40.869441  688914 logs.go:282] 0 containers: []
	W0210 13:24:40.869449  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:24:40.869456  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:24:40.869520  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:24:40.903978  688914 cri.go:89] found id: ""
	I0210 13:24:40.904005  688914 logs.go:282] 0 containers: []
	W0210 13:24:40.904014  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:24:40.904019  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:24:40.904086  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:24:40.937376  688914 cri.go:89] found id: ""
	I0210 13:24:40.937408  688914 logs.go:282] 0 containers: []
	W0210 13:24:40.937416  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:24:40.937426  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:24:40.937444  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:24:40.987586  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:24:40.987628  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:24:41.000596  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:24:41.000625  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:24:41.075352  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:24:41.075376  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:24:41.075396  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:24:41.155409  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:24:41.155441  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:24:43.696222  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:24:43.709019  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:24:43.709115  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:24:43.741277  688914 cri.go:89] found id: ""
	I0210 13:24:43.741309  688914 logs.go:282] 0 containers: []
	W0210 13:24:43.741319  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:24:43.741328  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:24:43.741393  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:24:43.780217  688914 cri.go:89] found id: ""
	I0210 13:24:43.780248  688914 logs.go:282] 0 containers: []
	W0210 13:24:43.780259  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:24:43.780267  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:24:43.780326  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:24:43.818627  688914 cri.go:89] found id: ""
	I0210 13:24:43.818660  688914 logs.go:282] 0 containers: []
	W0210 13:24:43.818673  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:24:43.818681  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:24:43.818747  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:24:43.855216  688914 cri.go:89] found id: ""
	I0210 13:24:43.855248  688914 logs.go:282] 0 containers: []
	W0210 13:24:43.855258  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:24:43.855266  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:24:43.855331  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:24:43.889360  688914 cri.go:89] found id: ""
	I0210 13:24:43.889394  688914 logs.go:282] 0 containers: []
	W0210 13:24:43.889402  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:24:43.889410  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:24:43.889476  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:24:43.934224  688914 cri.go:89] found id: ""
	I0210 13:24:43.934258  688914 logs.go:282] 0 containers: []
	W0210 13:24:43.934266  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:24:43.934273  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:24:43.934329  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:24:43.974800  688914 cri.go:89] found id: ""
	I0210 13:24:43.974830  688914 logs.go:282] 0 containers: []
	W0210 13:24:43.974837  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:24:43.974844  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:24:43.974897  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:24:44.017085  688914 cri.go:89] found id: ""
	I0210 13:24:44.017128  688914 logs.go:282] 0 containers: []
	W0210 13:24:44.017139  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:24:44.017152  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:24:44.017171  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:24:44.067430  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:24:44.067470  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:24:44.081581  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:24:44.081618  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:24:44.153720  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:24:44.153743  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:24:44.153810  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:24:44.235557  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:24:44.235597  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:24:46.773208  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:24:46.785471  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:24:46.785541  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:24:46.819010  688914 cri.go:89] found id: ""
	I0210 13:24:46.819043  688914 logs.go:282] 0 containers: []
	W0210 13:24:46.819053  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:24:46.819061  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:24:46.819125  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:24:46.851361  688914 cri.go:89] found id: ""
	I0210 13:24:46.851395  688914 logs.go:282] 0 containers: []
	W0210 13:24:46.851408  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:24:46.851416  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:24:46.851489  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:24:46.887040  688914 cri.go:89] found id: ""
	I0210 13:24:46.887074  688914 logs.go:282] 0 containers: []
	W0210 13:24:46.887086  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:24:46.887094  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:24:46.887159  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:24:46.919719  688914 cri.go:89] found id: ""
	I0210 13:24:46.919752  688914 logs.go:282] 0 containers: []
	W0210 13:24:46.919763  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:24:46.919780  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:24:46.919854  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:24:46.962383  688914 cri.go:89] found id: ""
	I0210 13:24:46.962416  688914 logs.go:282] 0 containers: []
	W0210 13:24:46.962429  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:24:46.962438  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:24:46.962510  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:24:46.997529  688914 cri.go:89] found id: ""
	I0210 13:24:46.997558  688914 logs.go:282] 0 containers: []
	W0210 13:24:46.997567  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:24:46.997573  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:24:46.997624  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:24:47.034666  688914 cri.go:89] found id: ""
	I0210 13:24:47.034698  688914 logs.go:282] 0 containers: []
	W0210 13:24:47.034709  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:24:47.034717  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:24:47.034772  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:24:47.072750  688914 cri.go:89] found id: ""
	I0210 13:24:47.072780  688914 logs.go:282] 0 containers: []
	W0210 13:24:47.072788  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:24:47.072799  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:24:47.072811  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:24:47.126909  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:24:47.126946  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:24:47.139755  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:24:47.139783  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:24:47.207327  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:24:47.207369  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:24:47.207395  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:24:47.296476  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:24:47.296530  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:24:49.839781  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:24:49.852562  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:24:49.852630  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:24:49.887112  688914 cri.go:89] found id: ""
	I0210 13:24:49.887146  688914 logs.go:282] 0 containers: []
	W0210 13:24:49.887160  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:24:49.887179  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:24:49.887245  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:24:49.920850  688914 cri.go:89] found id: ""
	I0210 13:24:49.920878  688914 logs.go:282] 0 containers: []
	W0210 13:24:49.920885  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:24:49.920891  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:24:49.920944  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:24:49.950969  688914 cri.go:89] found id: ""
	I0210 13:24:49.951002  688914 logs.go:282] 0 containers: []
	W0210 13:24:49.951010  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:24:49.951017  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:24:49.951074  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:24:49.985312  688914 cri.go:89] found id: ""
	I0210 13:24:49.985341  688914 logs.go:282] 0 containers: []
	W0210 13:24:49.985350  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:24:49.985357  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:24:49.985420  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:24:50.022609  688914 cri.go:89] found id: ""
	I0210 13:24:50.022643  688914 logs.go:282] 0 containers: []
	W0210 13:24:50.022654  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:24:50.022662  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:24:50.022741  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:24:50.060874  688914 cri.go:89] found id: ""
	I0210 13:24:50.060910  688914 logs.go:282] 0 containers: []
	W0210 13:24:50.060921  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:24:50.060928  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:24:50.060995  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:24:50.105868  688914 cri.go:89] found id: ""
	I0210 13:24:50.105904  688914 logs.go:282] 0 containers: []
	W0210 13:24:50.105916  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:24:50.105924  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:24:50.105987  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:24:50.143929  688914 cri.go:89] found id: ""
	I0210 13:24:50.143961  688914 logs.go:282] 0 containers: []
	W0210 13:24:50.143980  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:24:50.143990  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:24:50.144006  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:24:50.205049  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:24:50.205092  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:24:50.224083  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:24:50.224118  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:24:50.291786  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:24:50.291812  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:24:50.291831  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:24:50.371326  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:24:50.371371  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:24:52.919235  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:24:52.937153  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:24:52.937253  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:24:52.969532  688914 cri.go:89] found id: ""
	I0210 13:24:52.969567  688914 logs.go:282] 0 containers: []
	W0210 13:24:52.969578  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:24:52.969586  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:24:52.969647  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:24:53.002238  688914 cri.go:89] found id: ""
	I0210 13:24:53.002269  688914 logs.go:282] 0 containers: []
	W0210 13:24:53.002280  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:24:53.002287  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:24:53.002362  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:24:53.035346  688914 cri.go:89] found id: ""
	I0210 13:24:53.035376  688914 logs.go:282] 0 containers: []
	W0210 13:24:53.035384  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:24:53.035392  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:24:53.035461  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:24:53.072805  688914 cri.go:89] found id: ""
	I0210 13:24:53.072897  688914 logs.go:282] 0 containers: []
	W0210 13:24:53.072916  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:24:53.072926  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:24:53.073004  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:24:53.110660  688914 cri.go:89] found id: ""
	I0210 13:24:53.110691  688914 logs.go:282] 0 containers: []
	W0210 13:24:53.110702  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:24:53.110712  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:24:53.110780  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:24:53.147192  688914 cri.go:89] found id: ""
	I0210 13:24:53.147222  688914 logs.go:282] 0 containers: []
	W0210 13:24:53.147233  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:24:53.147242  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:24:53.147309  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:24:53.182225  688914 cri.go:89] found id: ""
	I0210 13:24:53.182260  688914 logs.go:282] 0 containers: []
	W0210 13:24:53.182272  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:24:53.182280  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:24:53.182356  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:24:53.222558  688914 cri.go:89] found id: ""
	I0210 13:24:53.222590  688914 logs.go:282] 0 containers: []
	W0210 13:24:53.222601  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:24:53.222614  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:24:53.222630  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:24:53.279358  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:24:53.279408  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:24:53.294748  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:24:53.294787  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:24:53.369719  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:24:53.369745  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:24:53.369762  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:24:53.451596  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:24:53.451639  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:24:55.993228  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:24:56.005645  688914 kubeadm.go:597] duration metric: took 4m2.60696863s to restartPrimaryControlPlane
	W0210 13:24:56.005721  688914 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0210 13:24:56.005746  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0210 13:24:56.513498  688914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 13:24:56.526951  688914 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0210 13:24:56.536360  688914 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 13:24:56.544989  688914 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 13:24:56.545005  688914 kubeadm.go:157] found existing configuration files:
	
	I0210 13:24:56.545053  688914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0210 13:24:56.553248  688914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 13:24:56.553299  688914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 13:24:56.562196  688914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0210 13:24:56.570708  688914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 13:24:56.570756  688914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 13:24:56.580086  688914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0210 13:24:56.588161  688914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 13:24:56.588207  688914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 13:24:56.596487  688914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0210 13:24:56.604340  688914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 13:24:56.604385  688914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0210 13:24:56.612499  688914 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0210 13:24:56.823209  688914 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0210 13:26:52.767674  688914 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0210 13:26:52.767807  688914 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0210 13:26:52.769626  688914 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0210 13:26:52.769700  688914 kubeadm.go:310] [preflight] Running pre-flight checks
	I0210 13:26:52.769810  688914 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0210 13:26:52.769934  688914 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0210 13:26:52.770031  688914 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0210 13:26:52.770114  688914 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0210 13:26:52.771972  688914 out.go:235]   - Generating certificates and keys ...
	I0210 13:26:52.772065  688914 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0210 13:26:52.772157  688914 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0210 13:26:52.772272  688914 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0210 13:26:52.772338  688914 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0210 13:26:52.772402  688914 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0210 13:26:52.772464  688914 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0210 13:26:52.772523  688914 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0210 13:26:52.772581  688914 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0210 13:26:52.772660  688914 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0210 13:26:52.772734  688914 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0210 13:26:52.772770  688914 kubeadm.go:310] [certs] Using the existing "sa" key
	I0210 13:26:52.772822  688914 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0210 13:26:52.772867  688914 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0210 13:26:52.772917  688914 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0210 13:26:52.772974  688914 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0210 13:26:52.773022  688914 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0210 13:26:52.773151  688914 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0210 13:26:52.773258  688914 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0210 13:26:52.773305  688914 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0210 13:26:52.773386  688914 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0210 13:26:52.774698  688914 out.go:235]   - Booting up control plane ...
	I0210 13:26:52.774783  688914 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0210 13:26:52.774853  688914 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0210 13:26:52.774915  688914 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0210 13:26:52.775002  688914 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0210 13:26:52.775179  688914 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0210 13:26:52.775244  688914 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0210 13:26:52.775340  688914 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:26:52.775545  688914 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:26:52.775613  688914 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:26:52.775783  688914 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:26:52.775841  688914 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:26:52.776005  688914 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:26:52.776090  688914 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:26:52.776307  688914 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:26:52.776424  688914 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:26:52.776602  688914 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:26:52.776616  688914 kubeadm.go:310] 
	I0210 13:26:52.776653  688914 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0210 13:26:52.776690  688914 kubeadm.go:310] 		timed out waiting for the condition
	I0210 13:26:52.776699  688914 kubeadm.go:310] 
	I0210 13:26:52.776733  688914 kubeadm.go:310] 	This error is likely caused by:
	I0210 13:26:52.776763  688914 kubeadm.go:310] 		- The kubelet is not running
	I0210 13:26:52.776850  688914 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0210 13:26:52.776856  688914 kubeadm.go:310] 
	I0210 13:26:52.776949  688914 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0210 13:26:52.776979  688914 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0210 13:26:52.777011  688914 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0210 13:26:52.777017  688914 kubeadm.go:310] 
	I0210 13:26:52.777134  688914 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0210 13:26:52.777239  688914 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0210 13:26:52.777252  688914 kubeadm.go:310] 
	I0210 13:26:52.777401  688914 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0210 13:26:52.777543  688914 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0210 13:26:52.777651  688914 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0210 13:26:52.777721  688914 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0210 13:26:52.777789  688914 kubeadm.go:310] 
	W0210 13:26:52.777852  688914 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0210 13:26:52.777903  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0210 13:26:58.074596  688914 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.296665584s)
	I0210 13:26:58.074683  688914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 13:26:58.091152  688914 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 13:26:58.102648  688914 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 13:26:58.102673  688914 kubeadm.go:157] found existing configuration files:
	
	I0210 13:26:58.102740  688914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0210 13:26:58.113654  688914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 13:26:58.113729  688914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 13:26:58.124863  688914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0210 13:26:58.135257  688914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 13:26:58.135321  688914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 13:26:58.145634  688914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0210 13:26:58.154591  688914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 13:26:58.154654  688914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 13:26:58.163835  688914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0210 13:26:58.172611  688914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 13:26:58.172679  688914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0210 13:26:58.182392  688914 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0210 13:26:58.251261  688914 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0210 13:26:58.251358  688914 kubeadm.go:310] [preflight] Running pre-flight checks
	I0210 13:26:58.383309  688914 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0210 13:26:58.383435  688914 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0210 13:26:58.383542  688914 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0210 13:26:58.550776  688914 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0210 13:26:58.552680  688914 out.go:235]   - Generating certificates and keys ...
	I0210 13:26:58.552793  688914 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0210 13:26:58.552881  688914 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0210 13:26:58.553007  688914 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0210 13:26:58.553091  688914 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0210 13:26:58.553226  688914 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0210 13:26:58.553329  688914 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0210 13:26:58.553420  688914 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0210 13:26:58.553525  688914 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0210 13:26:58.553642  688914 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0210 13:26:58.553774  688914 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0210 13:26:58.553837  688914 kubeadm.go:310] [certs] Using the existing "sa" key
	I0210 13:26:58.553918  688914 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0210 13:26:58.654826  688914 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0210 13:26:58.871525  688914 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0210 13:26:59.121959  688914 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0210 13:26:59.254004  688914 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0210 13:26:59.268822  688914 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0210 13:26:59.269202  688914 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0210 13:26:59.269279  688914 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0210 13:26:59.410011  688914 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0210 13:26:59.412184  688914 out.go:235]   - Booting up control plane ...
	I0210 13:26:59.412320  688914 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0210 13:26:59.425128  688914 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0210 13:26:59.426554  688914 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0210 13:26:59.427605  688914 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0210 13:26:59.433353  688914 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0210 13:27:39.435230  688914 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0210 13:27:39.435410  688914 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:27:39.435648  688914 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:27:44.436555  688914 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:27:44.436828  688914 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:27:54.437160  688914 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:27:54.437400  688914 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:28:14.437678  688914 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:28:14.437931  688914 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:28:54.436979  688914 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:28:54.437271  688914 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:28:54.437281  688914 kubeadm.go:310] 
	I0210 13:28:54.437319  688914 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0210 13:28:54.437355  688914 kubeadm.go:310] 		timed out waiting for the condition
	I0210 13:28:54.437361  688914 kubeadm.go:310] 
	I0210 13:28:54.437390  688914 kubeadm.go:310] 	This error is likely caused by:
	I0210 13:28:54.437468  688914 kubeadm.go:310] 		- The kubelet is not running
	I0210 13:28:54.437614  688914 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0210 13:28:54.437628  688914 kubeadm.go:310] 
	I0210 13:28:54.437762  688914 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0210 13:28:54.437806  688914 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0210 13:28:54.437850  688914 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0210 13:28:54.437863  688914 kubeadm.go:310] 
	I0210 13:28:54.437986  688914 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0210 13:28:54.438064  688914 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0210 13:28:54.438084  688914 kubeadm.go:310] 
	I0210 13:28:54.438245  688914 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0210 13:28:54.438388  688914 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0210 13:28:54.438510  688914 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0210 13:28:54.438608  688914 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0210 13:28:54.438622  688914 kubeadm.go:310] 
	I0210 13:28:54.439017  688914 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0210 13:28:54.439094  688914 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0210 13:28:54.439183  688914 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0210 13:28:54.439220  688914 kubeadm.go:394] duration metric: took 8m1.096783715s to StartCluster
	I0210 13:28:54.439356  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:28:54.439446  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:28:54.481711  688914 cri.go:89] found id: ""
	I0210 13:28:54.481745  688914 logs.go:282] 0 containers: []
	W0210 13:28:54.481753  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:28:54.481759  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:28:54.481826  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:28:54.515485  688914 cri.go:89] found id: ""
	I0210 13:28:54.515513  688914 logs.go:282] 0 containers: []
	W0210 13:28:54.515521  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:28:54.515528  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:28:54.515585  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:28:54.565719  688914 cri.go:89] found id: ""
	I0210 13:28:54.565767  688914 logs.go:282] 0 containers: []
	W0210 13:28:54.565779  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:28:54.565788  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:28:54.565864  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:28:54.597764  688914 cri.go:89] found id: ""
	I0210 13:28:54.597806  688914 logs.go:282] 0 containers: []
	W0210 13:28:54.597814  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:28:54.597821  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:28:54.597888  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:28:54.631935  688914 cri.go:89] found id: ""
	I0210 13:28:54.631965  688914 logs.go:282] 0 containers: []
	W0210 13:28:54.631975  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:28:54.631982  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:28:54.632052  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:28:54.664095  688914 cri.go:89] found id: ""
	I0210 13:28:54.664135  688914 logs.go:282] 0 containers: []
	W0210 13:28:54.664147  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:28:54.664154  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:28:54.664213  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:28:54.695397  688914 cri.go:89] found id: ""
	I0210 13:28:54.695433  688914 logs.go:282] 0 containers: []
	W0210 13:28:54.695445  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:28:54.695454  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:28:54.695522  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:28:54.732080  688914 cri.go:89] found id: ""
	I0210 13:28:54.732115  688914 logs.go:282] 0 containers: []
	W0210 13:28:54.732127  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:28:54.732150  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:28:54.732163  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:28:54.838309  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:28:54.838352  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:28:54.876415  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:28:54.876444  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:28:54.925312  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:28:54.925353  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:28:54.938075  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:28:54.938108  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:28:55.007575  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0210 13:28:55.007606  688914 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0210 13:28:55.007664  688914 out.go:270] * 
	W0210 13:28:55.007737  688914 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0210 13:28:55.007760  688914 out.go:270] * 
	W0210 13:28:55.008646  688914 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0210 13:28:55.012559  688914 out.go:201] 
	W0210 13:28:55.013936  688914 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0210 13:28:55.013983  688914 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0210 13:28:55.014019  688914 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0210 13:28:55.015512  688914 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-745712 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
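The failing start matches the suggestion printed at the end of the log above: the kubelet never became healthy, and minikube points at the kubelet cgroup driver. The following is only an illustrative retry-and-diagnose sketch assembled from the command in this test and the log's own hints; it is not part of the recorded run, and the --extra-config override comes straight from the suggestion above:

	# retry the same start with the kubelet cgroup-driver override suggested in the log
	out/minikube-linux-amd64 start -p old-k8s-version-745712 --memory=2200 --alsologtostderr --wait=true \
	  --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false \
	  --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd

	# if the kubelet still fails, inspect it inside the node with the commands the kubeadm output recommends
	out/minikube-linux-amd64 -p old-k8s-version-745712 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-745712 ssh "sudo journalctl -xeu kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-745712 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"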
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-745712 -n old-k8s-version-745712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-745712 -n old-k8s-version-745712: exit status 2 (241.750559ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-745712 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| image   | no-preload-112306 image list                           | no-preload-112306            | jenkins | v1.35.0 | 10 Feb 25 13:23 UTC | 10 Feb 25 13:23 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-112306                                   | no-preload-112306            | jenkins | v1.35.0 | 10 Feb 25 13:23 UTC | 10 Feb 25 13:23 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-112306                                   | no-preload-112306            | jenkins | v1.35.0 | 10 Feb 25 13:23 UTC | 10 Feb 25 13:23 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-112306                                   | no-preload-112306            | jenkins | v1.35.0 | 10 Feb 25 13:23 UTC | 10 Feb 25 13:23 UTC |
	| delete  | -p no-preload-112306                                   | no-preload-112306            | jenkins | v1.35.0 | 10 Feb 25 13:23 UTC | 10 Feb 25 13:23 UTC |
	| start   | -p newest-cni-078760 --memory=2200 --alsologtostderr   | newest-cni-078760            | jenkins | v1.35.0 | 10 Feb 25 13:23 UTC | 10 Feb 25 13:24 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| image   | embed-certs-396582 image list                          | embed-certs-396582           | jenkins | v1.35.0 | 10 Feb 25 13:24 UTC | 10 Feb 25 13:24 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-396582                                  | embed-certs-396582           | jenkins | v1.35.0 | 10 Feb 25 13:24 UTC | 10 Feb 25 13:24 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-396582                                  | embed-certs-396582           | jenkins | v1.35.0 | 10 Feb 25 13:24 UTC | 10 Feb 25 13:24 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-396582                                  | embed-certs-396582           | jenkins | v1.35.0 | 10 Feb 25 13:24 UTC | 10 Feb 25 13:24 UTC |
	| delete  | -p embed-certs-396582                                  | embed-certs-396582           | jenkins | v1.35.0 | 10 Feb 25 13:24 UTC | 10 Feb 25 13:24 UTC |
	| addons  | enable metrics-server -p newest-cni-078760             | newest-cni-078760            | jenkins | v1.35.0 | 10 Feb 25 13:24 UTC | 10 Feb 25 13:24 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-078760                                   | newest-cni-078760            | jenkins | v1.35.0 | 10 Feb 25 13:24 UTC | 10 Feb 25 13:24 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-078760                  | newest-cni-078760            | jenkins | v1.35.0 | 10 Feb 25 13:24 UTC | 10 Feb 25 13:24 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-078760 --memory=2200 --alsologtostderr   | newest-cni-078760            | jenkins | v1.35.0 | 10 Feb 25 13:24 UTC | 10 Feb 25 13:25 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-078760 image list                           | newest-cni-078760            | jenkins | v1.35.0 | 10 Feb 25 13:25 UTC | 10 Feb 25 13:25 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-078760                                   | newest-cni-078760            | jenkins | v1.35.0 | 10 Feb 25 13:25 UTC | 10 Feb 25 13:25 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-078760                                   | newest-cni-078760            | jenkins | v1.35.0 | 10 Feb 25 13:25 UTC | 10 Feb 25 13:25 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-078760                                   | newest-cni-078760            | jenkins | v1.35.0 | 10 Feb 25 13:25 UTC | 10 Feb 25 13:25 UTC |
	| delete  | -p newest-cni-078760                                   | newest-cni-078760            | jenkins | v1.35.0 | 10 Feb 25 13:25 UTC | 10 Feb 25 13:25 UTC |
	| image   | default-k8s-diff-port-957542                           | default-k8s-diff-port-957542 | jenkins | v1.35.0 | 10 Feb 25 13:28 UTC | 10 Feb 25 13:28 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-957542 | jenkins | v1.35.0 | 10 Feb 25 13:28 UTC | 10 Feb 25 13:28 UTC |
	|         | default-k8s-diff-port-957542                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-957542 | jenkins | v1.35.0 | 10 Feb 25 13:28 UTC | 10 Feb 25 13:28 UTC |
	|         | default-k8s-diff-port-957542                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-957542 | jenkins | v1.35.0 | 10 Feb 25 13:28 UTC | 10 Feb 25 13:28 UTC |
	|         | default-k8s-diff-port-957542                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-957542 | jenkins | v1.35.0 | 10 Feb 25 13:28 UTC | 10 Feb 25 13:28 UTC |
	|         | default-k8s-diff-port-957542                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/10 13:24:41
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0210 13:24:41.261359  691489 out.go:345] Setting OutFile to fd 1 ...
	I0210 13:24:41.261536  691489 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 13:24:41.261547  691489 out.go:358] Setting ErrFile to fd 2...
	I0210 13:24:41.261554  691489 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 13:24:41.261746  691489 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20383-625153/.minikube/bin
	I0210 13:24:41.262302  691489 out.go:352] Setting JSON to false
	I0210 13:24:41.263380  691489 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":18431,"bootTime":1739175450,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0210 13:24:41.263451  691489 start.go:139] virtualization: kvm guest
	I0210 13:24:41.265793  691489 out.go:177] * [newest-cni-078760] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0210 13:24:41.267418  691489 notify.go:220] Checking for updates...
	I0210 13:24:41.267458  691489 out.go:177]   - MINIKUBE_LOCATION=20383
	I0210 13:24:41.268698  691489 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 13:24:41.270028  691489 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20383-625153/kubeconfig
	I0210 13:24:41.271343  691489 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20383-625153/.minikube
	I0210 13:24:41.272529  691489 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0210 13:24:41.273658  691489 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0210 13:24:41.275235  691489 config.go:182] Loaded profile config "newest-cni-078760": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 13:24:41.275676  691489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:24:41.275733  691489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:24:41.291098  691489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34559
	I0210 13:24:41.291639  691489 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:24:41.292262  691489 main.go:141] libmachine: Using API Version  1
	I0210 13:24:41.292292  691489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:24:41.292606  691489 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:24:41.292771  691489 main.go:141] libmachine: (newest-cni-078760) Calling .DriverName
	I0210 13:24:41.292989  691489 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 13:24:41.293438  691489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:24:41.293515  691489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:24:41.308113  691489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38677
	I0210 13:24:41.308493  691489 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:24:41.308908  691489 main.go:141] libmachine: Using API Version  1
	I0210 13:24:41.308925  691489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:24:41.309289  691489 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:24:41.309516  691489 main.go:141] libmachine: (newest-cni-078760) Calling .DriverName
	I0210 13:24:41.345364  691489 out.go:177] * Using the kvm2 driver based on existing profile
	I0210 13:24:41.346519  691489 start.go:297] selected driver: kvm2
	I0210 13:24:41.346533  691489 start.go:901] validating driver "kvm2" against &{Name:newest-cni-078760 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-078760 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Net
work: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 13:24:41.346634  691489 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0210 13:24:41.347359  691489 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 13:24:41.347444  691489 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20383-625153/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0210 13:24:41.361853  691489 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0210 13:24:41.362275  691489 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0210 13:24:41.362308  691489 cni.go:84] Creating CNI manager for ""
	I0210 13:24:41.362373  691489 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 13:24:41.362421  691489 start.go:340] cluster config:
	{Name:newest-cni-078760 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-078760 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 13:24:41.362555  691489 iso.go:125] acquiring lock: {Name:mk013d189757e85c0699014f5ef29205d9d4927f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 13:24:41.365015  691489 out.go:177] * Starting "newest-cni-078760" primary control-plane node in "newest-cni-078760" cluster
	I0210 13:24:41.366217  691489 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0210 13:24:41.366274  691489 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20383-625153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0210 13:24:41.366291  691489 cache.go:56] Caching tarball of preloaded images
	I0210 13:24:41.366377  691489 preload.go:172] Found /home/jenkins/minikube-integration/20383-625153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0210 13:24:41.366391  691489 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0210 13:24:41.366538  691489 profile.go:143] Saving config to /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/newest-cni-078760/config.json ...
	I0210 13:24:41.366777  691489 start.go:360] acquireMachinesLock for newest-cni-078760: {Name:mk28e87da66de739a4c7c70d1fb5afc4ce31a4d0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0210 13:24:41.366839  691489 start.go:364] duration metric: took 35.147µs to acquireMachinesLock for "newest-cni-078760"
	I0210 13:24:41.366859  691489 start.go:96] Skipping create...Using existing machine configuration
	I0210 13:24:41.366868  691489 fix.go:54] fixHost starting: 
	I0210 13:24:41.367244  691489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:24:41.367288  691489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:24:41.381304  691489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38575
	I0210 13:24:41.381768  691489 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:24:41.382361  691489 main.go:141] libmachine: Using API Version  1
	I0210 13:24:41.382386  691489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:24:41.382722  691489 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:24:41.382913  691489 main.go:141] libmachine: (newest-cni-078760) Calling .DriverName
	I0210 13:24:41.383081  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetState
	I0210 13:24:41.385267  691489 fix.go:112] recreateIfNeeded on newest-cni-078760: state=Stopped err=<nil>
	I0210 13:24:41.385305  691489 main.go:141] libmachine: (newest-cni-078760) Calling .DriverName
	W0210 13:24:41.385473  691489 fix.go:138] unexpected machine state, will restart: <nil>
	I0210 13:24:41.387457  691489 out.go:177] * Restarting existing kvm2 VM for "newest-cni-078760" ...
	I0210 13:24:39.769831  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:24:41.770142  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:24:40.661417  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:24:40.673492  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:24:40.673565  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:24:40.704651  688914 cri.go:89] found id: ""
	I0210 13:24:40.704682  688914 logs.go:282] 0 containers: []
	W0210 13:24:40.704691  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:24:40.704698  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:24:40.704757  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:24:40.738312  688914 cri.go:89] found id: ""
	I0210 13:24:40.738340  688914 logs.go:282] 0 containers: []
	W0210 13:24:40.738348  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:24:40.738355  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:24:40.738427  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:24:40.770358  688914 cri.go:89] found id: ""
	I0210 13:24:40.770392  688914 logs.go:282] 0 containers: []
	W0210 13:24:40.770404  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:24:40.770413  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:24:40.770483  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:24:40.806743  688914 cri.go:89] found id: ""
	I0210 13:24:40.806777  688914 logs.go:282] 0 containers: []
	W0210 13:24:40.806789  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:24:40.806797  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:24:40.806856  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:24:40.838580  688914 cri.go:89] found id: ""
	I0210 13:24:40.838614  688914 logs.go:282] 0 containers: []
	W0210 13:24:40.838626  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:24:40.838643  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:24:40.838715  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:24:40.869410  688914 cri.go:89] found id: ""
	I0210 13:24:40.869441  688914 logs.go:282] 0 containers: []
	W0210 13:24:40.869449  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:24:40.869456  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:24:40.869520  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:24:40.903978  688914 cri.go:89] found id: ""
	I0210 13:24:40.904005  688914 logs.go:282] 0 containers: []
	W0210 13:24:40.904014  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:24:40.904019  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:24:40.904086  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:24:40.937376  688914 cri.go:89] found id: ""
	I0210 13:24:40.937408  688914 logs.go:282] 0 containers: []
	W0210 13:24:40.937416  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:24:40.937426  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:24:40.937444  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:24:40.987586  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:24:40.987628  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:24:41.000596  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:24:41.000625  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:24:41.075352  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:24:41.075376  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:24:41.075396  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:24:41.155409  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:24:41.155441  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:24:43.696222  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:24:43.709019  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:24:43.709115  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:24:43.741277  688914 cri.go:89] found id: ""
	I0210 13:24:43.741309  688914 logs.go:282] 0 containers: []
	W0210 13:24:43.741319  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:24:43.741328  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:24:43.741393  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:24:43.780217  688914 cri.go:89] found id: ""
	I0210 13:24:43.780248  688914 logs.go:282] 0 containers: []
	W0210 13:24:43.780259  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:24:43.780267  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:24:43.780326  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:24:43.818627  688914 cri.go:89] found id: ""
	I0210 13:24:43.818660  688914 logs.go:282] 0 containers: []
	W0210 13:24:43.818673  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:24:43.818681  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:24:43.818747  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:24:43.855216  688914 cri.go:89] found id: ""
	I0210 13:24:43.855248  688914 logs.go:282] 0 containers: []
	W0210 13:24:43.855258  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:24:43.855266  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:24:43.855331  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:24:43.889360  688914 cri.go:89] found id: ""
	I0210 13:24:43.889394  688914 logs.go:282] 0 containers: []
	W0210 13:24:43.889402  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:24:43.889410  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:24:43.889476  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:24:43.934224  688914 cri.go:89] found id: ""
	I0210 13:24:43.934258  688914 logs.go:282] 0 containers: []
	W0210 13:24:43.934266  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:24:43.934273  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:24:43.934329  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:24:43.974800  688914 cri.go:89] found id: ""
	I0210 13:24:43.974830  688914 logs.go:282] 0 containers: []
	W0210 13:24:43.974837  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:24:43.974844  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:24:43.974897  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:24:44.017085  688914 cri.go:89] found id: ""
	I0210 13:24:44.017128  688914 logs.go:282] 0 containers: []
	W0210 13:24:44.017139  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:24:44.017152  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:24:44.017171  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:24:44.067430  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:24:44.067470  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:24:44.081581  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:24:44.081618  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:24:44.153720  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:24:44.153743  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:24:44.153810  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:24:44.235557  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:24:44.235597  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:24:41.388557  691489 main.go:141] libmachine: (newest-cni-078760) Calling .Start
	I0210 13:24:41.388729  691489 main.go:141] libmachine: (newest-cni-078760) starting domain...
	I0210 13:24:41.388749  691489 main.go:141] libmachine: (newest-cni-078760) ensuring networks are active...
	I0210 13:24:41.389682  691489 main.go:141] libmachine: (newest-cni-078760) Ensuring network default is active
	I0210 13:24:41.390063  691489 main.go:141] libmachine: (newest-cni-078760) Ensuring network mk-newest-cni-078760 is active
	I0210 13:24:41.390463  691489 main.go:141] libmachine: (newest-cni-078760) getting domain XML...
	I0210 13:24:41.391221  691489 main.go:141] libmachine: (newest-cni-078760) creating domain...
	I0210 13:24:42.616334  691489 main.go:141] libmachine: (newest-cni-078760) waiting for IP...
	I0210 13:24:42.617299  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:42.617829  691489 main.go:141] libmachine: (newest-cni-078760) DBG | unable to find current IP address of domain newest-cni-078760 in network mk-newest-cni-078760
	I0210 13:24:42.617918  691489 main.go:141] libmachine: (newest-cni-078760) DBG | I0210 13:24:42.617824  691524 retry.go:31] will retry after 283.264685ms: waiting for domain to come up
	I0210 13:24:42.903325  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:42.904000  691489 main.go:141] libmachine: (newest-cni-078760) DBG | unable to find current IP address of domain newest-cni-078760 in network mk-newest-cni-078760
	I0210 13:24:42.904028  691489 main.go:141] libmachine: (newest-cni-078760) DBG | I0210 13:24:42.903933  691524 retry.go:31] will retry after 344.515197ms: waiting for domain to come up
	I0210 13:24:43.250750  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:43.251374  691489 main.go:141] libmachine: (newest-cni-078760) DBG | unable to find current IP address of domain newest-cni-078760 in network mk-newest-cni-078760
	I0210 13:24:43.251425  691489 main.go:141] libmachine: (newest-cni-078760) DBG | I0210 13:24:43.251339  691524 retry.go:31] will retry after 393.453533ms: waiting for domain to come up
	I0210 13:24:43.646892  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:43.647502  691489 main.go:141] libmachine: (newest-cni-078760) DBG | unable to find current IP address of domain newest-cni-078760 in network mk-newest-cni-078760
	I0210 13:24:43.647530  691489 main.go:141] libmachine: (newest-cni-078760) DBG | I0210 13:24:43.647479  691524 retry.go:31] will retry after 372.747782ms: waiting for domain to come up
	I0210 13:24:44.022175  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:44.022720  691489 main.go:141] libmachine: (newest-cni-078760) DBG | unable to find current IP address of domain newest-cni-078760 in network mk-newest-cni-078760
	I0210 13:24:44.022762  691489 main.go:141] libmachine: (newest-cni-078760) DBG | I0210 13:24:44.022643  691524 retry.go:31] will retry after 498.159478ms: waiting for domain to come up
	I0210 13:24:44.522570  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:44.523198  691489 main.go:141] libmachine: (newest-cni-078760) DBG | unable to find current IP address of domain newest-cni-078760 in network mk-newest-cni-078760
	I0210 13:24:44.523228  691489 main.go:141] libmachine: (newest-cni-078760) DBG | I0210 13:24:44.523153  691524 retry.go:31] will retry after 604.957125ms: waiting for domain to come up
	I0210 13:24:45.129970  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:45.130451  691489 main.go:141] libmachine: (newest-cni-078760) DBG | unable to find current IP address of domain newest-cni-078760 in network mk-newest-cni-078760
	I0210 13:24:45.130473  691489 main.go:141] libmachine: (newest-cni-078760) DBG | I0210 13:24:45.130420  691524 retry.go:31] will retry after 898.332464ms: waiting for domain to come up
	I0210 13:24:46.030650  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:46.031180  691489 main.go:141] libmachine: (newest-cni-078760) DBG | unable to find current IP address of domain newest-cni-078760 in network mk-newest-cni-078760
	I0210 13:24:46.031209  691489 main.go:141] libmachine: (newest-cni-078760) DBG | I0210 13:24:46.031128  691524 retry.go:31] will retry after 1.265422975s: waiting for domain to come up
	I0210 13:24:44.271495  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:24:46.770352  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:24:46.773208  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:24:46.785471  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:24:46.785541  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:24:46.819010  688914 cri.go:89] found id: ""
	I0210 13:24:46.819043  688914 logs.go:282] 0 containers: []
	W0210 13:24:46.819053  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:24:46.819061  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:24:46.819125  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:24:46.851361  688914 cri.go:89] found id: ""
	I0210 13:24:46.851395  688914 logs.go:282] 0 containers: []
	W0210 13:24:46.851408  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:24:46.851416  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:24:46.851489  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:24:46.887040  688914 cri.go:89] found id: ""
	I0210 13:24:46.887074  688914 logs.go:282] 0 containers: []
	W0210 13:24:46.887086  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:24:46.887094  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:24:46.887159  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:24:46.919719  688914 cri.go:89] found id: ""
	I0210 13:24:46.919752  688914 logs.go:282] 0 containers: []
	W0210 13:24:46.919763  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:24:46.919780  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:24:46.919854  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:24:46.962383  688914 cri.go:89] found id: ""
	I0210 13:24:46.962416  688914 logs.go:282] 0 containers: []
	W0210 13:24:46.962429  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:24:46.962438  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:24:46.962510  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:24:46.997529  688914 cri.go:89] found id: ""
	I0210 13:24:46.997558  688914 logs.go:282] 0 containers: []
	W0210 13:24:46.997567  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:24:46.997573  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:24:46.997624  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:24:47.034666  688914 cri.go:89] found id: ""
	I0210 13:24:47.034698  688914 logs.go:282] 0 containers: []
	W0210 13:24:47.034709  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:24:47.034717  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:24:47.034772  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:24:47.072750  688914 cri.go:89] found id: ""
	I0210 13:24:47.072780  688914 logs.go:282] 0 containers: []
	W0210 13:24:47.072788  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:24:47.072799  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:24:47.072811  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:24:47.126909  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:24:47.126946  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:24:47.139755  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:24:47.139783  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:24:47.207327  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:24:47.207369  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:24:47.207395  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:24:47.296476  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:24:47.296530  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:24:49.839781  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:24:49.852562  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:24:49.852630  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:24:49.887112  688914 cri.go:89] found id: ""
	I0210 13:24:49.887146  688914 logs.go:282] 0 containers: []
	W0210 13:24:49.887160  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:24:49.887179  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:24:49.887245  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:24:49.920850  688914 cri.go:89] found id: ""
	I0210 13:24:49.920878  688914 logs.go:282] 0 containers: []
	W0210 13:24:49.920885  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:24:49.920891  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:24:49.920944  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:24:49.950969  688914 cri.go:89] found id: ""
	I0210 13:24:49.951002  688914 logs.go:282] 0 containers: []
	W0210 13:24:49.951010  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:24:49.951017  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:24:49.951074  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:24:49.985312  688914 cri.go:89] found id: ""
	I0210 13:24:49.985341  688914 logs.go:282] 0 containers: []
	W0210 13:24:49.985350  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:24:49.985357  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:24:49.985420  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:24:50.022609  688914 cri.go:89] found id: ""
	I0210 13:24:50.022643  688914 logs.go:282] 0 containers: []
	W0210 13:24:50.022654  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:24:50.022662  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:24:50.022741  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:24:50.060874  688914 cri.go:89] found id: ""
	I0210 13:24:50.060910  688914 logs.go:282] 0 containers: []
	W0210 13:24:50.060921  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:24:50.060928  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:24:50.060995  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:24:50.105868  688914 cri.go:89] found id: ""
	I0210 13:24:50.105904  688914 logs.go:282] 0 containers: []
	W0210 13:24:50.105916  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:24:50.105924  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:24:50.105987  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:24:47.297831  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:47.298426  691489 main.go:141] libmachine: (newest-cni-078760) DBG | unable to find current IP address of domain newest-cni-078760 in network mk-newest-cni-078760
	I0210 13:24:47.298458  691489 main.go:141] libmachine: (newest-cni-078760) DBG | I0210 13:24:47.298379  691524 retry.go:31] will retry after 1.501368767s: waiting for domain to come up
	I0210 13:24:48.802064  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:48.802681  691489 main.go:141] libmachine: (newest-cni-078760) DBG | unable to find current IP address of domain newest-cni-078760 in network mk-newest-cni-078760
	I0210 13:24:48.802713  691489 main.go:141] libmachine: (newest-cni-078760) DBG | I0210 13:24:48.802644  691524 retry.go:31] will retry after 1.952900788s: waiting for domain to come up
	I0210 13:24:50.757205  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:50.757657  691489 main.go:141] libmachine: (newest-cni-078760) DBG | unable to find current IP address of domain newest-cni-078760 in network mk-newest-cni-078760
	I0210 13:24:50.757681  691489 main.go:141] libmachine: (newest-cni-078760) DBG | I0210 13:24:50.757634  691524 retry.go:31] will retry after 2.841299634s: waiting for domain to come up
	I0210 13:24:48.770842  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:24:50.771415  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:24:50.143929  688914 cri.go:89] found id: ""
	I0210 13:24:50.143961  688914 logs.go:282] 0 containers: []
	W0210 13:24:50.143980  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:24:50.143990  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:24:50.144006  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:24:50.205049  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:24:50.205092  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:24:50.224083  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:24:50.224118  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:24:50.291786  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:24:50.291812  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:24:50.291831  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:24:50.371326  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:24:50.371371  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:24:52.919235  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:24:52.937153  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:24:52.937253  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:24:52.969532  688914 cri.go:89] found id: ""
	I0210 13:24:52.969567  688914 logs.go:282] 0 containers: []
	W0210 13:24:52.969578  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:24:52.969586  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:24:52.969647  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:24:53.002238  688914 cri.go:89] found id: ""
	I0210 13:24:53.002269  688914 logs.go:282] 0 containers: []
	W0210 13:24:53.002280  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:24:53.002287  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:24:53.002362  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:24:53.035346  688914 cri.go:89] found id: ""
	I0210 13:24:53.035376  688914 logs.go:282] 0 containers: []
	W0210 13:24:53.035384  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:24:53.035392  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:24:53.035461  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:24:53.072805  688914 cri.go:89] found id: ""
	I0210 13:24:53.072897  688914 logs.go:282] 0 containers: []
	W0210 13:24:53.072916  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:24:53.072926  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:24:53.073004  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:24:53.110660  688914 cri.go:89] found id: ""
	I0210 13:24:53.110691  688914 logs.go:282] 0 containers: []
	W0210 13:24:53.110702  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:24:53.110712  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:24:53.110780  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:24:53.147192  688914 cri.go:89] found id: ""
	I0210 13:24:53.147222  688914 logs.go:282] 0 containers: []
	W0210 13:24:53.147233  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:24:53.147242  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:24:53.147309  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:24:53.182225  688914 cri.go:89] found id: ""
	I0210 13:24:53.182260  688914 logs.go:282] 0 containers: []
	W0210 13:24:53.182272  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:24:53.182280  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:24:53.182356  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:24:53.222558  688914 cri.go:89] found id: ""
	I0210 13:24:53.222590  688914 logs.go:282] 0 containers: []
	W0210 13:24:53.222601  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:24:53.222614  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:24:53.222630  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:24:53.279358  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:24:53.279408  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:24:53.294748  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:24:53.294787  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:24:53.369719  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:24:53.369745  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:24:53.369762  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:24:53.451596  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:24:53.451639  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:24:53.601402  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:53.601912  691489 main.go:141] libmachine: (newest-cni-078760) DBG | unable to find current IP address of domain newest-cni-078760 in network mk-newest-cni-078760
	I0210 13:24:53.601961  691489 main.go:141] libmachine: (newest-cni-078760) DBG | I0210 13:24:53.601883  691524 retry.go:31] will retry after 2.542274821s: waiting for domain to come up
	I0210 13:24:56.146274  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:56.146832  691489 main.go:141] libmachine: (newest-cni-078760) DBG | unable to find current IP address of domain newest-cni-078760 in network mk-newest-cni-078760
	I0210 13:24:56.146863  691489 main.go:141] libmachine: (newest-cni-078760) DBG | I0210 13:24:56.146790  691524 retry.go:31] will retry after 3.125209956s: waiting for domain to come up
	I0210 13:24:52.779375  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:24:55.269617  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:24:57.271040  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:24:55.993228  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:24:56.005645  688914 kubeadm.go:597] duration metric: took 4m2.60696863s to restartPrimaryControlPlane
	W0210 13:24:56.005721  688914 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0210 13:24:56.005746  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0210 13:24:56.513498  688914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 13:24:56.526951  688914 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0210 13:24:56.536360  688914 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 13:24:56.544989  688914 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 13:24:56.545005  688914 kubeadm.go:157] found existing configuration files:
	
	I0210 13:24:56.545053  688914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0210 13:24:56.553248  688914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 13:24:56.553299  688914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 13:24:56.562196  688914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0210 13:24:56.570708  688914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 13:24:56.570756  688914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 13:24:56.580086  688914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0210 13:24:56.588161  688914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 13:24:56.588207  688914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 13:24:56.596487  688914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0210 13:24:56.604340  688914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 13:24:56.604385  688914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0210 13:24:56.612499  688914 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0210 13:24:56.823209  688914 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0210 13:24:59.274113  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:59.274657  691489 main.go:141] libmachine: (newest-cni-078760) found domain IP: 192.168.39.250
	I0210 13:24:59.274689  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has current primary IP address 192.168.39.250 and MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:59.274697  691489 main.go:141] libmachine: (newest-cni-078760) reserving static IP address...
	I0210 13:24:59.275163  691489 main.go:141] libmachine: (newest-cni-078760) DBG | found host DHCP lease matching {name: "newest-cni-078760", mac: "52:54:00:6b:a1:b8", ip: "192.168.39.250"} in network mk-newest-cni-078760: {Iface:virbr1 ExpiryTime:2025-02-10 14:24:52 +0000 UTC Type:0 Mac:52:54:00:6b:a1:b8 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:newest-cni-078760 Clientid:01:52:54:00:6b:a1:b8}
	I0210 13:24:59.275200  691489 main.go:141] libmachine: (newest-cni-078760) DBG | skip adding static IP to network mk-newest-cni-078760 - found existing host DHCP lease matching {name: "newest-cni-078760", mac: "52:54:00:6b:a1:b8", ip: "192.168.39.250"}
	I0210 13:24:59.275212  691489 main.go:141] libmachine: (newest-cni-078760) reserved static IP address 192.168.39.250 for domain newest-cni-078760
	I0210 13:24:59.275224  691489 main.go:141] libmachine: (newest-cni-078760) waiting for SSH...
	I0210 13:24:59.275240  691489 main.go:141] libmachine: (newest-cni-078760) DBG | Getting to WaitForSSH function...
	I0210 13:24:59.277564  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:59.277937  691489 main.go:141] libmachine: (newest-cni-078760) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a1:b8", ip: ""} in network mk-newest-cni-078760: {Iface:virbr1 ExpiryTime:2025-02-10 14:24:52 +0000 UTC Type:0 Mac:52:54:00:6b:a1:b8 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:newest-cni-078760 Clientid:01:52:54:00:6b:a1:b8}
	I0210 13:24:59.277972  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined IP address 192.168.39.250 and MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:59.278049  691489 main.go:141] libmachine: (newest-cni-078760) DBG | Using SSH client type: external
	I0210 13:24:59.278098  691489 main.go:141] libmachine: (newest-cni-078760) DBG | Using SSH private key: /home/jenkins/minikube-integration/20383-625153/.minikube/machines/newest-cni-078760/id_rsa (-rw-------)
	I0210 13:24:59.278150  691489 main.go:141] libmachine: (newest-cni-078760) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.250 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20383-625153/.minikube/machines/newest-cni-078760/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0210 13:24:59.278164  691489 main.go:141] libmachine: (newest-cni-078760) DBG | About to run SSH command:
	I0210 13:24:59.278172  691489 main.go:141] libmachine: (newest-cni-078760) DBG | exit 0
	I0210 13:24:59.405034  691489 main.go:141] libmachine: (newest-cni-078760) DBG | SSH cmd err, output: <nil>: 
	I0210 13:24:59.405508  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetConfigRaw
	I0210 13:24:59.406149  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetIP
	I0210 13:24:59.408696  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:59.409061  691489 main.go:141] libmachine: (newest-cni-078760) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a1:b8", ip: ""} in network mk-newest-cni-078760: {Iface:virbr1 ExpiryTime:2025-02-10 14:24:52 +0000 UTC Type:0 Mac:52:54:00:6b:a1:b8 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:newest-cni-078760 Clientid:01:52:54:00:6b:a1:b8}
	I0210 13:24:59.409097  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined IP address 192.168.39.250 and MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:59.409422  691489 profile.go:143] Saving config to /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/newest-cni-078760/config.json ...
	I0210 13:24:59.409617  691489 machine.go:93] provisionDockerMachine start ...
	I0210 13:24:59.409635  691489 main.go:141] libmachine: (newest-cni-078760) Calling .DriverName
	I0210 13:24:59.409892  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHHostname
	I0210 13:24:59.412202  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:59.412549  691489 main.go:141] libmachine: (newest-cni-078760) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a1:b8", ip: ""} in network mk-newest-cni-078760: {Iface:virbr1 ExpiryTime:2025-02-10 14:24:52 +0000 UTC Type:0 Mac:52:54:00:6b:a1:b8 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:newest-cni-078760 Clientid:01:52:54:00:6b:a1:b8}
	I0210 13:24:59.412570  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined IP address 192.168.39.250 and MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:59.412770  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHPort
	I0210 13:24:59.412949  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHKeyPath
	I0210 13:24:59.413066  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHKeyPath
	I0210 13:24:59.413229  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHUsername
	I0210 13:24:59.413383  691489 main.go:141] libmachine: Using SSH client type: native
	I0210 13:24:59.413675  691489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0210 13:24:59.413693  691489 main.go:141] libmachine: About to run SSH command:
	hostname
	I0210 13:24:59.520985  691489 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0210 13:24:59.521014  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetMachineName
	I0210 13:24:59.521304  691489 buildroot.go:166] provisioning hostname "newest-cni-078760"
	I0210 13:24:59.521348  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetMachineName
	I0210 13:24:59.521546  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHHostname
	I0210 13:24:59.524011  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:59.524395  691489 main.go:141] libmachine: (newest-cni-078760) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a1:b8", ip: ""} in network mk-newest-cni-078760: {Iface:virbr1 ExpiryTime:2025-02-10 14:24:52 +0000 UTC Type:0 Mac:52:54:00:6b:a1:b8 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:newest-cni-078760 Clientid:01:52:54:00:6b:a1:b8}
	I0210 13:24:59.524426  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined IP address 192.168.39.250 and MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:59.524511  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHPort
	I0210 13:24:59.524677  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHKeyPath
	I0210 13:24:59.524830  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHKeyPath
	I0210 13:24:59.524930  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHUsername
	I0210 13:24:59.525090  691489 main.go:141] libmachine: Using SSH client type: native
	I0210 13:24:59.525301  691489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0210 13:24:59.525317  691489 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-078760 && echo "newest-cni-078760" | sudo tee /etc/hostname
	I0210 13:24:59.646397  691489 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-078760
	
	I0210 13:24:59.646428  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHHostname
	I0210 13:24:59.649460  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:59.649855  691489 main.go:141] libmachine: (newest-cni-078760) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a1:b8", ip: ""} in network mk-newest-cni-078760: {Iface:virbr1 ExpiryTime:2025-02-10 14:24:52 +0000 UTC Type:0 Mac:52:54:00:6b:a1:b8 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:newest-cni-078760 Clientid:01:52:54:00:6b:a1:b8}
	I0210 13:24:59.649887  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined IP address 192.168.39.250 and MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:59.650122  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHPort
	I0210 13:24:59.650345  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHKeyPath
	I0210 13:24:59.650510  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHKeyPath
	I0210 13:24:59.650661  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHUsername
	I0210 13:24:59.650865  691489 main.go:141] libmachine: Using SSH client type: native
	I0210 13:24:59.651057  691489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0210 13:24:59.651075  691489 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-078760' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-078760/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-078760' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0210 13:24:59.765308  691489 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0210 13:24:59.765347  691489 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20383-625153/.minikube CaCertPath:/home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20383-625153/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20383-625153/.minikube}
	I0210 13:24:59.765387  691489 buildroot.go:174] setting up certificates
	I0210 13:24:59.765401  691489 provision.go:84] configureAuth start
	I0210 13:24:59.765424  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetMachineName
	I0210 13:24:59.765729  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetIP
	I0210 13:24:59.768971  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:59.769366  691489 main.go:141] libmachine: (newest-cni-078760) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a1:b8", ip: ""} in network mk-newest-cni-078760: {Iface:virbr1 ExpiryTime:2025-02-10 14:24:52 +0000 UTC Type:0 Mac:52:54:00:6b:a1:b8 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:newest-cni-078760 Clientid:01:52:54:00:6b:a1:b8}
	I0210 13:24:59.769391  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined IP address 192.168.39.250 and MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:59.769640  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHHostname
	I0210 13:24:59.772244  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:59.772630  691489 main.go:141] libmachine: (newest-cni-078760) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a1:b8", ip: ""} in network mk-newest-cni-078760: {Iface:virbr1 ExpiryTime:2025-02-10 14:24:52 +0000 UTC Type:0 Mac:52:54:00:6b:a1:b8 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:newest-cni-078760 Clientid:01:52:54:00:6b:a1:b8}
	I0210 13:24:59.772667  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined IP address 192.168.39.250 and MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:59.772825  691489 provision.go:143] copyHostCerts
	I0210 13:24:59.772893  691489 exec_runner.go:144] found /home/jenkins/minikube-integration/20383-625153/.minikube/ca.pem, removing ...
	I0210 13:24:59.772903  691489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20383-625153/.minikube/ca.pem
	I0210 13:24:59.772968  691489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20383-625153/.minikube/ca.pem (1082 bytes)
	I0210 13:24:59.773076  691489 exec_runner.go:144] found /home/jenkins/minikube-integration/20383-625153/.minikube/cert.pem, removing ...
	I0210 13:24:59.773084  691489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20383-625153/.minikube/cert.pem
	I0210 13:24:59.773148  691489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20383-625153/.minikube/cert.pem (1123 bytes)
	I0210 13:24:59.773228  691489 exec_runner.go:144] found /home/jenkins/minikube-integration/20383-625153/.minikube/key.pem, removing ...
	I0210 13:24:59.773236  691489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20383-625153/.minikube/key.pem
	I0210 13:24:59.773260  691489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20383-625153/.minikube/key.pem (1675 bytes)
	I0210 13:24:59.773329  691489 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20383-625153/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca-key.pem org=jenkins.newest-cni-078760 san=[127.0.0.1 192.168.39.250 localhost minikube newest-cni-078760]
	I0210 13:25:00.289725  691489 provision.go:177] copyRemoteCerts
	I0210 13:25:00.289790  691489 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0210 13:25:00.289817  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHHostname
	I0210 13:25:00.292758  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:00.293115  691489 main.go:141] libmachine: (newest-cni-078760) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a1:b8", ip: ""} in network mk-newest-cni-078760: {Iface:virbr1 ExpiryTime:2025-02-10 14:24:52 +0000 UTC Type:0 Mac:52:54:00:6b:a1:b8 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:newest-cni-078760 Clientid:01:52:54:00:6b:a1:b8}
	I0210 13:25:00.293149  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined IP address 192.168.39.250 and MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:00.293357  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHPort
	I0210 13:25:00.293603  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHKeyPath
	I0210 13:25:00.293811  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHUsername
	I0210 13:25:00.293957  691489 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/newest-cni-078760/id_rsa Username:docker}
	I0210 13:25:00.383066  691489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0210 13:25:00.405672  691489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0210 13:25:00.428091  691489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0210 13:25:00.448809  691489 provision.go:87] duration metric: took 683.388073ms to configureAuth
	I0210 13:25:00.448837  691489 buildroot.go:189] setting minikube options for container-runtime
	I0210 13:25:00.449011  691489 config.go:182] Loaded profile config "newest-cni-078760": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 13:25:00.449092  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHHostname
	I0210 13:25:00.451834  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:00.452228  691489 main.go:141] libmachine: (newest-cni-078760) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a1:b8", ip: ""} in network mk-newest-cni-078760: {Iface:virbr1 ExpiryTime:2025-02-10 14:24:52 +0000 UTC Type:0 Mac:52:54:00:6b:a1:b8 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:newest-cni-078760 Clientid:01:52:54:00:6b:a1:b8}
	I0210 13:25:00.452255  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined IP address 192.168.39.250 and MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:00.452441  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHPort
	I0210 13:25:00.452649  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHKeyPath
	I0210 13:25:00.452807  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHKeyPath
	I0210 13:25:00.452911  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHUsername
	I0210 13:25:00.453073  691489 main.go:141] libmachine: Using SSH client type: native
	I0210 13:25:00.453278  691489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0210 13:25:00.453302  691489 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0210 13:25:00.672251  691489 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0210 13:25:00.672293  691489 machine.go:96] duration metric: took 1.262661195s to provisionDockerMachine
	I0210 13:25:00.672311  691489 start.go:293] postStartSetup for "newest-cni-078760" (driver="kvm2")
	I0210 13:25:00.672325  691489 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0210 13:25:00.672351  691489 main.go:141] libmachine: (newest-cni-078760) Calling .DriverName
	I0210 13:25:00.672711  691489 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0210 13:25:00.672751  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHHostname
	I0210 13:25:00.675260  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:00.675668  691489 main.go:141] libmachine: (newest-cni-078760) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a1:b8", ip: ""} in network mk-newest-cni-078760: {Iface:virbr1 ExpiryTime:2025-02-10 14:24:52 +0000 UTC Type:0 Mac:52:54:00:6b:a1:b8 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:newest-cni-078760 Clientid:01:52:54:00:6b:a1:b8}
	I0210 13:25:00.675700  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined IP address 192.168.39.250 and MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:00.675807  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHPort
	I0210 13:25:00.675998  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHKeyPath
	I0210 13:25:00.676205  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHUsername
	I0210 13:25:00.676346  691489 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/newest-cni-078760/id_rsa Username:docker}
	I0210 13:25:00.758840  691489 ssh_runner.go:195] Run: cat /etc/os-release
	I0210 13:25:00.762542  691489 info.go:137] Remote host: Buildroot 2023.02.9
	I0210 13:25:00.762567  691489 filesync.go:126] Scanning /home/jenkins/minikube-integration/20383-625153/.minikube/addons for local assets ...
	I0210 13:25:00.762639  691489 filesync.go:126] Scanning /home/jenkins/minikube-integration/20383-625153/.minikube/files for local assets ...
	I0210 13:25:00.762734  691489 filesync.go:149] local asset: /home/jenkins/minikube-integration/20383-625153/.minikube/files/etc/ssl/certs/6323522.pem -> 6323522.pem in /etc/ssl/certs
	I0210 13:25:00.762860  691489 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0210 13:25:00.773351  691489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/files/etc/ssl/certs/6323522.pem --> /etc/ssl/certs/6323522.pem (1708 bytes)
	I0210 13:25:00.796618  691489 start.go:296] duration metric: took 124.2886ms for postStartSetup
	I0210 13:25:00.796673  691489 fix.go:56] duration metric: took 19.429804907s for fixHost
	I0210 13:25:00.796697  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHHostname
	I0210 13:25:00.799632  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:00.799962  691489 main.go:141] libmachine: (newest-cni-078760) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a1:b8", ip: ""} in network mk-newest-cni-078760: {Iface:virbr1 ExpiryTime:2025-02-10 14:24:52 +0000 UTC Type:0 Mac:52:54:00:6b:a1:b8 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:newest-cni-078760 Clientid:01:52:54:00:6b:a1:b8}
	I0210 13:25:00.799989  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined IP address 192.168.39.250 and MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:00.800218  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHPort
	I0210 13:25:00.800405  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHKeyPath
	I0210 13:25:00.800535  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHKeyPath
	I0210 13:25:00.800642  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHUsername
	I0210 13:25:00.800769  691489 main.go:141] libmachine: Using SSH client type: native
	I0210 13:25:00.800931  691489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0210 13:25:00.800941  691489 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0210 13:25:00.909435  691489 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739193900.883827731
	
	I0210 13:25:00.909467  691489 fix.go:216] guest clock: 1739193900.883827731
	I0210 13:25:00.909475  691489 fix.go:229] Guest: 2025-02-10 13:25:00.883827731 +0000 UTC Remote: 2025-02-10 13:25:00.796678487 +0000 UTC m=+19.572875336 (delta=87.149244ms)
	I0210 13:25:00.909527  691489 fix.go:200] guest clock delta is within tolerance: 87.149244ms
	I0210 13:25:00.909539  691489 start.go:83] releasing machines lock for "newest-cni-078760", held for 19.542688037s
	I0210 13:25:00.909575  691489 main.go:141] libmachine: (newest-cni-078760) Calling .DriverName
	I0210 13:25:00.909866  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetIP
	I0210 13:25:00.912692  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:00.913180  691489 main.go:141] libmachine: (newest-cni-078760) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a1:b8", ip: ""} in network mk-newest-cni-078760: {Iface:virbr1 ExpiryTime:2025-02-10 14:24:52 +0000 UTC Type:0 Mac:52:54:00:6b:a1:b8 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:newest-cni-078760 Clientid:01:52:54:00:6b:a1:b8}
	I0210 13:25:00.913209  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined IP address 192.168.39.250 and MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:00.913393  691489 main.go:141] libmachine: (newest-cni-078760) Calling .DriverName
	I0210 13:25:00.913968  691489 main.go:141] libmachine: (newest-cni-078760) Calling .DriverName
	I0210 13:25:00.914173  691489 main.go:141] libmachine: (newest-cni-078760) Calling .DriverName
	I0210 13:25:00.914234  691489 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0210 13:25:00.914286  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHHostname
	I0210 13:25:00.914386  691489 ssh_runner.go:195] Run: cat /version.json
	I0210 13:25:00.914413  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHHostname
	I0210 13:25:00.917197  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:00.917270  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:00.917549  691489 main.go:141] libmachine: (newest-cni-078760) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a1:b8", ip: ""} in network mk-newest-cni-078760: {Iface:virbr1 ExpiryTime:2025-02-10 14:24:52 +0000 UTC Type:0 Mac:52:54:00:6b:a1:b8 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:newest-cni-078760 Clientid:01:52:54:00:6b:a1:b8}
	I0210 13:25:00.917577  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined IP address 192.168.39.250 and MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:00.917603  691489 main.go:141] libmachine: (newest-cni-078760) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a1:b8", ip: ""} in network mk-newest-cni-078760: {Iface:virbr1 ExpiryTime:2025-02-10 14:24:52 +0000 UTC Type:0 Mac:52:54:00:6b:a1:b8 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:newest-cni-078760 Clientid:01:52:54:00:6b:a1:b8}
	I0210 13:25:00.917618  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined IP address 192.168.39.250 and MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:00.917755  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHPort
	I0210 13:25:00.917938  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHPort
	I0210 13:25:00.917969  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHKeyPath
	I0210 13:25:00.918181  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHUsername
	I0210 13:25:00.918186  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHKeyPath
	I0210 13:25:00.918323  691489 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/newest-cni-078760/id_rsa Username:docker}
	I0210 13:25:00.918506  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHUsername
	I0210 13:25:00.918627  691489 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/newest-cni-078760/id_rsa Username:docker}
	I0210 13:25:01.016816  691489 ssh_runner.go:195] Run: systemctl --version
	I0210 13:25:01.022398  691489 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0210 13:25:01.160711  691489 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0210 13:25:01.166231  691489 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0210 13:25:01.166308  691489 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0210 13:25:01.181307  691489 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0210 13:25:01.181340  691489 start.go:495] detecting cgroup driver to use...
	I0210 13:25:01.181432  691489 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0210 13:25:01.196599  691489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0210 13:25:01.210368  691489 docker.go:217] disabling cri-docker service (if available) ...
	I0210 13:25:01.210447  691489 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0210 13:25:01.224277  691489 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0210 13:25:01.237050  691489 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0210 13:25:01.363079  691489 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0210 13:25:01.505721  691489 docker.go:233] disabling docker service ...
	I0210 13:25:01.505798  691489 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0210 13:25:01.519404  691489 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0210 13:25:01.531569  691489 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0210 13:25:01.656701  691489 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0210 13:25:01.761785  691489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0210 13:25:01.775504  691489 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0210 13:25:01.793265  691489 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0210 13:25:01.793350  691489 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:25:01.802631  691489 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0210 13:25:01.802704  691489 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:25:01.811794  691489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:25:01.821081  691489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:25:01.830115  691489 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0210 13:25:01.839351  691489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:25:01.848567  691489 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:25:01.864326  691489 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:25:01.874772  691489 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0210 13:25:01.884394  691489 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0210 13:25:01.884474  691489 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0210 13:25:01.897647  691489 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0210 13:25:01.906297  691489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 13:25:02.014414  691489 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0210 13:25:02.104325  691489 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0210 13:25:02.104434  691489 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0210 13:25:02.108842  691489 start.go:563] Will wait 60s for crictl version
	I0210 13:25:02.108917  691489 ssh_runner.go:195] Run: which crictl
	I0210 13:25:02.112360  691489 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0210 13:25:02.153660  691489 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0210 13:25:02.153771  691489 ssh_runner.go:195] Run: crio --version
	I0210 13:25:02.180774  691489 ssh_runner.go:195] Run: crio --version
	I0210 13:25:02.212419  691489 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0210 13:25:02.213655  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetIP
	I0210 13:25:02.216337  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:02.216703  691489 main.go:141] libmachine: (newest-cni-078760) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a1:b8", ip: ""} in network mk-newest-cni-078760: {Iface:virbr1 ExpiryTime:2025-02-10 14:24:52 +0000 UTC Type:0 Mac:52:54:00:6b:a1:b8 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:newest-cni-078760 Clientid:01:52:54:00:6b:a1:b8}
	I0210 13:25:02.216731  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined IP address 192.168.39.250 and MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:02.217046  691489 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0210 13:25:02.221017  691489 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 13:25:02.234095  691489 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0210 13:24:59.770976  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:02.273787  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:02.235371  691489 kubeadm.go:883] updating cluster {Name:newest-cni-078760 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-078760 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: M
ultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0210 13:25:02.235495  691489 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0210 13:25:02.235552  691489 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 13:25:02.269571  691489 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0210 13:25:02.269654  691489 ssh_runner.go:195] Run: which lz4
	I0210 13:25:02.273617  691489 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0210 13:25:02.277988  691489 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0210 13:25:02.278024  691489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0210 13:25:03.523616  691489 crio.go:462] duration metric: took 1.250045789s to copy over tarball
	I0210 13:25:03.523702  691489 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0210 13:25:05.658254  691489 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.134495502s)
	I0210 13:25:05.658291  691489 crio.go:469] duration metric: took 2.134641092s to extract the tarball
	I0210 13:25:05.658303  691489 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0210 13:25:05.695477  691489 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 13:25:05.735472  691489 crio.go:514] all images are preloaded for cri-o runtime.
	I0210 13:25:05.735496  691489 cache_images.go:84] Images are preloaded, skipping loading
	I0210 13:25:05.735505  691489 kubeadm.go:934] updating node { 192.168.39.250 8443 v1.32.1 crio true true} ...
	I0210 13:25:05.735610  691489 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-078760 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.250
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:newest-cni-078760 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0210 13:25:05.735681  691489 ssh_runner.go:195] Run: crio config
	I0210 13:25:05.785195  691489 cni.go:84] Creating CNI manager for ""
	I0210 13:25:05.785224  691489 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 13:25:05.785234  691489 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0210 13:25:05.785263  691489 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.250 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-078760 NodeName:newest-cni-078760 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.250"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.250 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0210 13:25:05.785425  691489 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.250
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-078760"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.250"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.250"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0210 13:25:05.785511  691489 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0210 13:25:05.794956  691489 binaries.go:44] Found k8s binaries, skipping transfer
	I0210 13:25:05.795032  691489 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0210 13:25:05.804169  691489 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0210 13:25:05.819782  691489 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0210 13:25:05.835103  691489 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I0210 13:25:05.851153  691489 ssh_runner.go:195] Run: grep 192.168.39.250	control-plane.minikube.internal$ /etc/hosts
	I0210 13:25:05.854677  691489 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.250	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 13:25:05.865911  691489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 13:25:05.995134  691489 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 13:25:06.017449  691489 certs.go:68] Setting up /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/newest-cni-078760 for IP: 192.168.39.250
	I0210 13:25:06.017475  691489 certs.go:194] generating shared ca certs ...
	I0210 13:25:06.017497  691489 certs.go:226] acquiring lock for ca certs: {Name:mkf2f72f82a6bd7e3c16bb224cd26b80c3c89e0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:25:06.017658  691489 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20383-625153/.minikube/ca.key
	I0210 13:25:06.017711  691489 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20383-625153/.minikube/proxy-client-ca.key
	I0210 13:25:06.017726  691489 certs.go:256] generating profile certs ...
	I0210 13:25:06.017814  691489 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/newest-cni-078760/client.key
	I0210 13:25:06.017907  691489 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/newest-cni-078760/apiserver.key.1c0773a6
	I0210 13:25:06.017962  691489 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/newest-cni-078760/proxy-client.key
	I0210 13:25:06.018106  691489 certs.go:484] found cert: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/632352.pem (1338 bytes)
	W0210 13:25:06.018145  691489 certs.go:480] ignoring /home/jenkins/minikube-integration/20383-625153/.minikube/certs/632352_empty.pem, impossibly tiny 0 bytes
	I0210 13:25:06.018160  691489 certs.go:484] found cert: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca-key.pem (1675 bytes)
	I0210 13:25:06.018194  691489 certs.go:484] found cert: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca.pem (1082 bytes)
	I0210 13:25:06.018255  691489 certs.go:484] found cert: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/cert.pem (1123 bytes)
	I0210 13:25:06.018301  691489 certs.go:484] found cert: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/key.pem (1675 bytes)
	I0210 13:25:06.018360  691489 certs.go:484] found cert: /home/jenkins/minikube-integration/20383-625153/.minikube/files/etc/ssl/certs/6323522.pem (1708 bytes)
	I0210 13:25:06.019219  691489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0210 13:25:06.049870  691489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0210 13:25:06.079056  691489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0210 13:25:06.111520  691489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0210 13:25:06.144808  691489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/newest-cni-078760/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0210 13:25:06.170435  691489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/newest-cni-078760/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0210 13:25:06.193477  691489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/newest-cni-078760/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0210 13:25:06.216083  691489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/newest-cni-078760/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0210 13:25:06.237420  691489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/files/etc/ssl/certs/6323522.pem --> /usr/share/ca-certificates/6323522.pem (1708 bytes)
	I0210 13:25:06.259080  691489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0210 13:25:04.771284  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:07.270419  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:06.281857  691489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/certs/632352.pem --> /usr/share/ca-certificates/632352.pem (1338 bytes)
	I0210 13:25:06.303749  691489 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0210 13:25:06.319343  691489 ssh_runner.go:195] Run: openssl version
	I0210 13:25:06.324961  691489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/632352.pem && ln -fs /usr/share/ca-certificates/632352.pem /etc/ssl/certs/632352.pem"
	I0210 13:25:06.334777  691489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/632352.pem
	I0210 13:25:06.338786  691489 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 10 12:13 /usr/share/ca-certificates/632352.pem
	I0210 13:25:06.338851  691489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/632352.pem
	I0210 13:25:06.344301  691489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/632352.pem /etc/ssl/certs/51391683.0"
	I0210 13:25:06.354153  691489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6323522.pem && ln -fs /usr/share/ca-certificates/6323522.pem /etc/ssl/certs/6323522.pem"
	I0210 13:25:06.363691  691489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6323522.pem
	I0210 13:25:06.367845  691489 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 10 12:13 /usr/share/ca-certificates/6323522.pem
	I0210 13:25:06.367903  691489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6323522.pem
	I0210 13:25:06.373065  691489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6323522.pem /etc/ssl/certs/3ec20f2e.0"
	I0210 13:25:06.382808  691489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0210 13:25:06.392603  691489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0210 13:25:06.396500  691489 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 10 12:05 /usr/share/ca-certificates/minikubeCA.pem
	I0210 13:25:06.396554  691489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0210 13:25:06.401622  691489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0210 13:25:06.411181  691489 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0210 13:25:06.415359  691489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0210 13:25:06.420593  691489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0210 13:25:06.426061  691489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0210 13:25:06.431327  691489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0210 13:25:06.436533  691489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0210 13:25:06.441660  691489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0210 13:25:06.446816  691489 kubeadm.go:392] StartCluster: {Name:newest-cni-078760 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-078760 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Mult
iNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 13:25:06.446895  691489 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0210 13:25:06.446930  691489 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0210 13:25:06.483125  691489 cri.go:89] found id: ""
	I0210 13:25:06.483211  691489 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0210 13:25:06.493195  691489 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0210 13:25:06.493227  691489 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0210 13:25:06.493279  691489 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0210 13:25:06.502619  691489 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0210 13:25:06.503337  691489 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-078760" does not appear in /home/jenkins/minikube-integration/20383-625153/kubeconfig
	I0210 13:25:06.503714  691489 kubeconfig.go:62] /home/jenkins/minikube-integration/20383-625153/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-078760" cluster setting kubeconfig missing "newest-cni-078760" context setting]
	I0210 13:25:06.504205  691489 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20383-625153/kubeconfig: {Name:mke7ef1ff4ff1259856291979fdd0337df3a08b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:25:06.505630  691489 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0210 13:25:06.514911  691489 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.250
	I0210 13:25:06.514960  691489 kubeadm.go:1160] stopping kube-system containers ...
	I0210 13:25:06.514977  691489 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0210 13:25:06.515037  691489 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0210 13:25:06.554131  691489 cri.go:89] found id: ""
	I0210 13:25:06.554214  691489 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0210 13:25:06.570574  691489 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 13:25:06.579872  691489 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 13:25:06.579894  691489 kubeadm.go:157] found existing configuration files:
	
	I0210 13:25:06.579940  691489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0210 13:25:06.588189  691489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 13:25:06.588248  691489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 13:25:06.596978  691489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0210 13:25:06.605371  691489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 13:25:06.605424  691489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 13:25:06.613792  691489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0210 13:25:06.621620  691489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 13:25:06.621676  691489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 13:25:06.629800  691489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0210 13:25:06.637455  691489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 13:25:06.637496  691489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0210 13:25:06.645304  691489 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0210 13:25:06.653346  691489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 13:25:06.763579  691489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 13:25:07.851528  691489 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.087906654s)
	I0210 13:25:07.851566  691489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0210 13:25:08.057073  691489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 13:25:08.142252  691489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0210 13:25:08.227881  691489 api_server.go:52] waiting for apiserver process to appear ...
	I0210 13:25:08.227987  691489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:25:08.728481  691489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:25:09.228059  691489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:25:09.728607  691489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:25:10.228860  691489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:25:10.310725  691489 api_server.go:72] duration metric: took 2.082844906s to wait for apiserver process to appear ...
	I0210 13:25:10.310754  691489 api_server.go:88] waiting for apiserver healthz status ...
	I0210 13:25:10.310775  691489 api_server.go:253] Checking apiserver healthz at https://192.168.39.250:8443/healthz ...
	I0210 13:25:10.311265  691489 api_server.go:269] stopped: https://192.168.39.250:8443/healthz: Get "https://192.168.39.250:8443/healthz": dial tcp 192.168.39.250:8443: connect: connection refused
	I0210 13:25:10.810910  691489 api_server.go:253] Checking apiserver healthz at https://192.168.39.250:8443/healthz ...
	I0210 13:25:09.289289  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:11.769486  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:12.947266  691489 api_server.go:279] https://192.168.39.250:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0210 13:25:12.947307  691489 api_server.go:103] status: https://192.168.39.250:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0210 13:25:12.947327  691489 api_server.go:253] Checking apiserver healthz at https://192.168.39.250:8443/healthz ...
	I0210 13:25:12.971991  691489 api_server.go:279] https://192.168.39.250:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0210 13:25:12.972028  691489 api_server.go:103] status: https://192.168.39.250:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0210 13:25:13.311219  691489 api_server.go:253] Checking apiserver healthz at https://192.168.39.250:8443/healthz ...
	I0210 13:25:13.322624  691489 api_server.go:279] https://192.168.39.250:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0210 13:25:13.322653  691489 api_server.go:103] status: https://192.168.39.250:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0210 13:25:13.811259  691489 api_server.go:253] Checking apiserver healthz at https://192.168.39.250:8443/healthz ...
	I0210 13:25:13.817960  691489 api_server.go:279] https://192.168.39.250:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0210 13:25:13.817992  691489 api_server.go:103] status: https://192.168.39.250:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0210 13:25:14.311715  691489 api_server.go:253] Checking apiserver healthz at https://192.168.39.250:8443/healthz ...
	I0210 13:25:14.319786  691489 api_server.go:279] https://192.168.39.250:8443/healthz returned 200:
	ok
	I0210 13:25:14.327973  691489 api_server.go:141] control plane version: v1.32.1
	I0210 13:25:14.328010  691489 api_server.go:131] duration metric: took 4.017247642s to wait for apiserver health ...
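(Editor's note: the lines above show minikube polling the apiserver's /healthz endpoint every ~500ms, treating connection-refused, 403 "system:anonymous" and 500 "poststarthook ... failed" responses as "not ready yet" until a 200 arrives. A minimal, hypothetical Go sketch of that kind of poll loop follows; it is not the test code itself, and the URL, timeout and retry interval are assumptions drawn from the log.)

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
    // Connection errors and non-200 statuses (403 while RBAC bootstraps, 500 while
    // poststarthooks finish) are treated as transient and retried.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		// The apiserver serves a self-signed certificate during bootstrap, so an
    		// anonymous probe like this one skips verification.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // healthz returned "ok"
    			}
    			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond) // roughly the retry cadence seen in the log
    	}
    	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }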
	I0210 13:25:14.328025  691489 cni.go:84] Creating CNI manager for ""
	I0210 13:25:14.328034  691489 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 13:25:14.330184  691489 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0210 13:25:14.331476  691489 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0210 13:25:14.348249  691489 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0210 13:25:14.366751  691489 system_pods.go:43] waiting for kube-system pods to appear ...
	I0210 13:25:14.371867  691489 system_pods.go:59] 8 kube-system pods found
	I0210 13:25:14.371912  691489 system_pods.go:61] "coredns-668d6bf9bc-6xmgm" [e079a121-a86a-40b1-ac42-e3c1d4a45d3e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0210 13:25:14.371924  691489 system_pods.go:61] "etcd-newest-cni-078760" [ab03adeb-629d-40cc-b5a7-612855165223] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0210 13:25:14.371934  691489 system_pods.go:61] "kube-apiserver-newest-cni-078760" [d6bb0517-d5ab-4839-8974-f7c6d58dad52] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0210 13:25:14.371943  691489 system_pods.go:61] "kube-controller-manager-newest-cni-078760" [960a3334-7167-4942-8f1c-5a03ea01e628] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0210 13:25:14.371947  691489 system_pods.go:61] "kube-proxy-kd8mx" [951cb4ab-6e99-4be5-87ee-9e9c8eb4c635] Running
	I0210 13:25:14.371958  691489 system_pods.go:61] "kube-scheduler-newest-cni-078760" [bb9270e8-85d5-460e-89b5-49f374c1775d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0210 13:25:14.371964  691489 system_pods.go:61] "metrics-server-f79f97bbb-m2m4m" [9505b23a-756e-405a-a279-9e5a64082f8d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0210 13:25:14.371973  691489 system_pods.go:61] "storage-provisioner" [027d0f58-173c-4c51-86c6-461f4393192c] Running
	I0210 13:25:14.371978  691489 system_pods.go:74] duration metric: took 5.204788ms to wait for pod list to return data ...
	I0210 13:25:14.371986  691489 node_conditions.go:102] verifying NodePressure condition ...
	I0210 13:25:14.376210  691489 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 13:25:14.376236  691489 node_conditions.go:123] node cpu capacity is 2
	I0210 13:25:14.376248  691489 node_conditions.go:105] duration metric: took 4.255584ms to run NodePressure ...
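(Editor's note: the node_conditions lines above read the node's ephemeral-storage and CPU capacity and verify that no pressure conditions are set before continuing. A hypothetical client-go sketch of that check follows; the kubeconfig path is an assumption, and minikube's actual implementation lives in node_conditions.go rather than this code.)

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Hypothetical kubeconfig path; the test suite uses its own per-profile kubeconfig.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		// Capacity mirrors the "storage ephemeral capacity" / "cpu capacity" log lines.
    		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, eph.String(), cpu.String())
    		// NodePressure verification: none of these conditions should be True.
    		for _, c := range n.Status.Conditions {
    			switch c.Type {
    			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
    				if c.Status == corev1.ConditionTrue {
    					fmt.Printf("  node %s reports %s\n", n.Name, c.Type)
    				}
    			}
    		}
    	}
    }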
	I0210 13:25:14.376267  691489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 13:25:14.658659  691489 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0210 13:25:14.673616  691489 ops.go:34] apiserver oom_adj: -16
	I0210 13:25:14.673643  691489 kubeadm.go:597] duration metric: took 8.180409154s to restartPrimaryControlPlane
	I0210 13:25:14.673654  691489 kubeadm.go:394] duration metric: took 8.226850795s to StartCluster
	I0210 13:25:14.673678  691489 settings.go:142] acquiring lock: {Name:mk4bd8331d641665e48ff1d1c4382f2e915609be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:25:14.673775  691489 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20383-625153/kubeconfig
	I0210 13:25:14.674826  691489 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20383-625153/kubeconfig: {Name:mke7ef1ff4ff1259856291979fdd0337df3a08b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:25:14.675121  691489 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0210 13:25:14.675203  691489 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0210 13:25:14.675305  691489 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-078760"
	I0210 13:25:14.675332  691489 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-078760"
	I0210 13:25:14.675330  691489 config.go:182] Loaded profile config "newest-cni-078760": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	W0210 13:25:14.675339  691489 addons.go:247] addon storage-provisioner should already be in state true
	I0210 13:25:14.675327  691489 addons.go:69] Setting default-storageclass=true in profile "newest-cni-078760"
	I0210 13:25:14.675356  691489 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-078760"
	I0210 13:25:14.675374  691489 host.go:66] Checking if "newest-cni-078760" exists ...
	I0210 13:25:14.675362  691489 addons.go:69] Setting dashboard=true in profile "newest-cni-078760"
	I0210 13:25:14.675406  691489 addons.go:238] Setting addon dashboard=true in "newest-cni-078760"
	I0210 13:25:14.675373  691489 addons.go:69] Setting metrics-server=true in profile "newest-cni-078760"
	W0210 13:25:14.675416  691489 addons.go:247] addon dashboard should already be in state true
	I0210 13:25:14.675439  691489 addons.go:238] Setting addon metrics-server=true in "newest-cni-078760"
	W0210 13:25:14.675452  691489 addons.go:247] addon metrics-server should already be in state true
	I0210 13:25:14.675456  691489 host.go:66] Checking if "newest-cni-078760" exists ...
	I0210 13:25:14.675501  691489 host.go:66] Checking if "newest-cni-078760" exists ...
	I0210 13:25:14.675825  691489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:25:14.675825  691489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:25:14.675865  691489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:25:14.675949  691489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:25:14.675956  691489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:25:14.675998  691489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:25:14.675994  691489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:25:14.676030  691489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:25:14.676626  691489 out.go:177] * Verifying Kubernetes components...
	I0210 13:25:14.677970  691489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 13:25:14.692819  691489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41133
	I0210 13:25:14.692863  691489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43933
	I0210 13:25:14.693307  691489 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:25:14.693457  691489 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:25:14.693889  691489 main.go:141] libmachine: Using API Version  1
	I0210 13:25:14.693917  691489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:25:14.694044  691489 main.go:141] libmachine: Using API Version  1
	I0210 13:25:14.694067  691489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:25:14.694275  691489 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:25:14.694467  691489 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:25:14.694675  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetState
	I0210 13:25:14.694875  691489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:25:14.694910  691489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:25:14.695631  691489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39397
	I0210 13:25:14.695666  691489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40755
	I0210 13:25:14.696018  691489 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:25:14.696028  691489 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:25:14.696521  691489 main.go:141] libmachine: Using API Version  1
	I0210 13:25:14.696541  691489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:25:14.696669  691489 main.go:141] libmachine: Using API Version  1
	I0210 13:25:14.696690  691489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:25:14.696922  691489 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:25:14.697247  691489 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:25:14.697481  691489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:25:14.697516  691489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:25:14.697803  691489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:25:14.697850  691489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:25:14.698182  691489 addons.go:238] Setting addon default-storageclass=true in "newest-cni-078760"
	W0210 13:25:14.698206  691489 addons.go:247] addon default-storageclass should already be in state true
	I0210 13:25:14.698236  691489 host.go:66] Checking if "newest-cni-078760" exists ...
	I0210 13:25:14.698612  691489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:25:14.698664  691489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:25:14.713772  691489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40313
	I0210 13:25:14.714442  691489 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:25:14.715026  691489 main.go:141] libmachine: Using API Version  1
	I0210 13:25:14.715052  691489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:25:14.715415  691489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46367
	I0210 13:25:14.715437  691489 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:25:14.715597  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetState
	I0210 13:25:14.715945  691489 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:25:14.716483  691489 main.go:141] libmachine: Using API Version  1
	I0210 13:25:14.716511  691489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:25:14.716848  691489 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:25:14.717071  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetState
	I0210 13:25:14.717863  691489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36917
	I0210 13:25:14.717964  691489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38361
	I0210 13:25:14.718191  691489 main.go:141] libmachine: (newest-cni-078760) Calling .DriverName
	I0210 13:25:14.718430  691489 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:25:14.718536  691489 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:25:14.718898  691489 main.go:141] libmachine: (newest-cni-078760) Calling .DriverName
	I0210 13:25:14.718993  691489 main.go:141] libmachine: Using API Version  1
	I0210 13:25:14.719014  691489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:25:14.719122  691489 main.go:141] libmachine: Using API Version  1
	I0210 13:25:14.719136  691489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:25:14.719353  691489 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:25:14.719538  691489 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:25:14.719570  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetState
	I0210 13:25:14.720089  691489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:25:14.720146  691489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:25:14.720737  691489 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0210 13:25:14.720739  691489 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0210 13:25:14.721144  691489 main.go:141] libmachine: (newest-cni-078760) Calling .DriverName
	I0210 13:25:14.722697  691489 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 13:25:14.722765  691489 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0210 13:25:14.722799  691489 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0210 13:25:14.722826  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHHostname
	I0210 13:25:14.724344  691489 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0210 13:25:14.724481  691489 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0210 13:25:14.724502  691489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0210 13:25:14.724523  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHHostname
	I0210 13:25:14.725362  691489 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0210 13:25:14.725382  691489 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0210 13:25:14.725403  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHHostname
	I0210 13:25:14.726853  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:14.727274  691489 main.go:141] libmachine: (newest-cni-078760) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a1:b8", ip: ""} in network mk-newest-cni-078760: {Iface:virbr1 ExpiryTime:2025-02-10 14:24:52 +0000 UTC Type:0 Mac:52:54:00:6b:a1:b8 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:newest-cni-078760 Clientid:01:52:54:00:6b:a1:b8}
	I0210 13:25:14.727299  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined IP address 192.168.39.250 and MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:14.727826  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHPort
	I0210 13:25:14.728040  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHKeyPath
	I0210 13:25:14.728183  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHUsername
	I0210 13:25:14.728402  691489 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/newest-cni-078760/id_rsa Username:docker}
	I0210 13:25:14.728481  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:14.728865  691489 main.go:141] libmachine: (newest-cni-078760) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a1:b8", ip: ""} in network mk-newest-cni-078760: {Iface:virbr1 ExpiryTime:2025-02-10 14:24:52 +0000 UTC Type:0 Mac:52:54:00:6b:a1:b8 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:newest-cni-078760 Clientid:01:52:54:00:6b:a1:b8}
	I0210 13:25:14.728895  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined IP address 192.168.39.250 and MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:14.728973  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:14.729181  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHPort
	I0210 13:25:14.729432  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHKeyPath
	I0210 13:25:14.729516  691489 main.go:141] libmachine: (newest-cni-078760) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a1:b8", ip: ""} in network mk-newest-cni-078760: {Iface:virbr1 ExpiryTime:2025-02-10 14:24:52 +0000 UTC Type:0 Mac:52:54:00:6b:a1:b8 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:newest-cni-078760 Clientid:01:52:54:00:6b:a1:b8}
	I0210 13:25:14.729542  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined IP address 192.168.39.250 and MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:14.729579  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHUsername
	I0210 13:25:14.729722  691489 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/newest-cni-078760/id_rsa Username:docker}
	I0210 13:25:14.729807  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHPort
	I0210 13:25:14.729972  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHKeyPath
	I0210 13:25:14.730124  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHUsername
	I0210 13:25:14.730252  691489 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/newest-cni-078760/id_rsa Username:docker}
	I0210 13:25:14.765255  691489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33493
	I0210 13:25:14.765791  691489 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:25:14.766387  691489 main.go:141] libmachine: Using API Version  1
	I0210 13:25:14.766420  691489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:25:14.766810  691489 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:25:14.767031  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetState
	I0210 13:25:14.768796  691489 main.go:141] libmachine: (newest-cni-078760) Calling .DriverName
	I0210 13:25:14.769012  691489 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0210 13:25:14.769028  691489 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0210 13:25:14.769046  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHHostname
	I0210 13:25:14.772060  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:14.772513  691489 main.go:141] libmachine: (newest-cni-078760) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a1:b8", ip: ""} in network mk-newest-cni-078760: {Iface:virbr1 ExpiryTime:2025-02-10 14:24:52 +0000 UTC Type:0 Mac:52:54:00:6b:a1:b8 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:newest-cni-078760 Clientid:01:52:54:00:6b:a1:b8}
	I0210 13:25:14.772563  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined IP address 192.168.39.250 and MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:14.772688  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHPort
	I0210 13:25:14.772874  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHKeyPath
	I0210 13:25:14.773046  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHUsername
	I0210 13:25:14.773224  691489 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/newest-cni-078760/id_rsa Username:docker}
	I0210 13:25:14.847727  691489 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 13:25:14.870840  691489 api_server.go:52] waiting for apiserver process to appear ...
	I0210 13:25:14.870928  691489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:25:14.886084  691489 api_server.go:72] duration metric: took 210.925044ms to wait for apiserver process to appear ...
	I0210 13:25:14.886114  691489 api_server.go:88] waiting for apiserver healthz status ...
	I0210 13:25:14.886139  691489 api_server.go:253] Checking apiserver healthz at https://192.168.39.250:8443/healthz ...
	I0210 13:25:14.890757  691489 api_server.go:279] https://192.168.39.250:8443/healthz returned 200:
	ok
	I0210 13:25:14.891635  691489 api_server.go:141] control plane version: v1.32.1
	I0210 13:25:14.891659  691489 api_server.go:131] duration metric: took 5.538021ms to wait for apiserver health ...
	I0210 13:25:14.891667  691489 system_pods.go:43] waiting for kube-system pods to appear ...
	I0210 13:25:14.894919  691489 system_pods.go:59] 8 kube-system pods found
	I0210 13:25:14.894946  691489 system_pods.go:61] "coredns-668d6bf9bc-6xmgm" [e079a121-a86a-40b1-ac42-e3c1d4a45d3e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0210 13:25:14.894957  691489 system_pods.go:61] "etcd-newest-cni-078760" [ab03adeb-629d-40cc-b5a7-612855165223] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0210 13:25:14.894978  691489 system_pods.go:61] "kube-apiserver-newest-cni-078760" [d6bb0517-d5ab-4839-8974-f7c6d58dad52] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0210 13:25:14.894993  691489 system_pods.go:61] "kube-controller-manager-newest-cni-078760" [960a3334-7167-4942-8f1c-5a03ea01e628] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0210 13:25:14.895003  691489 system_pods.go:61] "kube-proxy-kd8mx" [951cb4ab-6e99-4be5-87ee-9e9c8eb4c635] Running
	I0210 13:25:14.895012  691489 system_pods.go:61] "kube-scheduler-newest-cni-078760" [bb9270e8-85d5-460e-89b5-49f374c1775d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0210 13:25:14.895020  691489 system_pods.go:61] "metrics-server-f79f97bbb-m2m4m" [9505b23a-756e-405a-a279-9e5a64082f8d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0210 13:25:14.895031  691489 system_pods.go:61] "storage-provisioner" [027d0f58-173c-4c51-86c6-461f4393192c] Running
	I0210 13:25:14.895036  691489 system_pods.go:74] duration metric: took 3.36419ms to wait for pod list to return data ...
	I0210 13:25:14.895046  691489 default_sa.go:34] waiting for default service account to be created ...
	I0210 13:25:14.896970  691489 default_sa.go:45] found service account: "default"
	I0210 13:25:14.896991  691489 default_sa.go:55] duration metric: took 1.936863ms for default service account to be created ...
	I0210 13:25:14.897002  691489 kubeadm.go:582] duration metric: took 221.847464ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0210 13:25:14.897020  691489 node_conditions.go:102] verifying NodePressure condition ...
	I0210 13:25:14.898549  691489 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 13:25:14.898572  691489 node_conditions.go:123] node cpu capacity is 2
	I0210 13:25:14.898582  691489 node_conditions.go:105] duration metric: took 1.55688ms to run NodePressure ...
	I0210 13:25:14.898599  691489 start.go:241] waiting for startup goroutines ...
	I0210 13:25:14.932116  691489 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0210 13:25:14.932150  691489 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0210 13:25:14.934060  691489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0210 13:25:14.952546  691489 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0210 13:25:14.952574  691489 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0210 13:25:15.029473  691489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0210 13:25:15.031105  691489 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0210 13:25:15.031141  691489 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0210 13:25:15.056497  691489 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0210 13:25:15.056538  691489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0210 13:25:15.095190  691489 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0210 13:25:15.095224  691489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0210 13:25:15.121346  691489 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0210 13:25:15.121374  691489 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0210 13:25:15.153148  691489 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0210 13:25:15.153179  691489 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0210 13:25:15.216706  691489 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0210 13:25:15.216746  691489 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0210 13:25:15.241907  691489 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0210 13:25:15.241943  691489 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0210 13:25:15.302673  691489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0210 13:25:15.365047  691489 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0210 13:25:15.365100  691489 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0210 13:25:15.440460  691489 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0210 13:25:15.440489  691489 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0210 13:25:15.518952  691489 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0210 13:25:15.518987  691489 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0210 13:25:15.565860  691489 main.go:141] libmachine: Making call to close driver server
	I0210 13:25:15.565890  691489 main.go:141] libmachine: (newest-cni-078760) Calling .Close
	I0210 13:25:15.566253  691489 main.go:141] libmachine: Successfully made call to close driver server
	I0210 13:25:15.566279  691489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 13:25:15.566278  691489 main.go:141] libmachine: (newest-cni-078760) DBG | Closing plugin on server side
	I0210 13:25:15.566296  691489 main.go:141] libmachine: Making call to close driver server
	I0210 13:25:15.566308  691489 main.go:141] libmachine: (newest-cni-078760) Calling .Close
	I0210 13:25:15.566612  691489 main.go:141] libmachine: Successfully made call to close driver server
	I0210 13:25:15.566656  691489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 13:25:15.576240  691489 main.go:141] libmachine: Making call to close driver server
	I0210 13:25:15.576264  691489 main.go:141] libmachine: (newest-cni-078760) Calling .Close
	I0210 13:25:15.576535  691489 main.go:141] libmachine: Successfully made call to close driver server
	I0210 13:25:15.576557  691489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 13:25:15.576595  691489 main.go:141] libmachine: (newest-cni-078760) DBG | Closing plugin on server side
	I0210 13:25:15.580109  691489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0210 13:25:16.740012  691489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.71049179s)
	I0210 13:25:16.740081  691489 main.go:141] libmachine: Making call to close driver server
	I0210 13:25:16.740093  691489 main.go:141] libmachine: (newest-cni-078760) Calling .Close
	I0210 13:25:16.740447  691489 main.go:141] libmachine: Successfully made call to close driver server
	I0210 13:25:16.740469  691489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 13:25:16.740478  691489 main.go:141] libmachine: Making call to close driver server
	I0210 13:25:16.740487  691489 main.go:141] libmachine: (newest-cni-078760) Calling .Close
	I0210 13:25:16.740747  691489 main.go:141] libmachine: Successfully made call to close driver server
	I0210 13:25:16.740797  691489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 13:25:16.740830  691489 main.go:141] libmachine: (newest-cni-078760) DBG | Closing plugin on server side
	I0210 13:25:16.805424  691489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.502701364s)
	I0210 13:25:16.805480  691489 main.go:141] libmachine: Making call to close driver server
	I0210 13:25:16.805494  691489 main.go:141] libmachine: (newest-cni-078760) Calling .Close
	I0210 13:25:16.805796  691489 main.go:141] libmachine: (newest-cni-078760) DBG | Closing plugin on server side
	I0210 13:25:16.805817  691489 main.go:141] libmachine: Successfully made call to close driver server
	I0210 13:25:16.805851  691489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 13:25:16.805880  691489 main.go:141] libmachine: Making call to close driver server
	I0210 13:25:16.805893  691489 main.go:141] libmachine: (newest-cni-078760) Calling .Close
	I0210 13:25:16.806125  691489 main.go:141] libmachine: Successfully made call to close driver server
	I0210 13:25:16.806141  691489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 13:25:16.806142  691489 main.go:141] libmachine: (newest-cni-078760) DBG | Closing plugin on server side
	I0210 13:25:16.806153  691489 addons.go:479] Verifying addon metrics-server=true in "newest-cni-078760"
	I0210 13:25:17.452174  691489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.872018184s)
	I0210 13:25:17.452259  691489 main.go:141] libmachine: Making call to close driver server
	I0210 13:25:17.452280  691489 main.go:141] libmachine: (newest-cni-078760) Calling .Close
	I0210 13:25:17.452708  691489 main.go:141] libmachine: Successfully made call to close driver server
	I0210 13:25:17.452733  691489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 13:25:17.452748  691489 main.go:141] libmachine: Making call to close driver server
	I0210 13:25:17.452742  691489 main.go:141] libmachine: (newest-cni-078760) DBG | Closing plugin on server side
	I0210 13:25:17.452757  691489 main.go:141] libmachine: (newest-cni-078760) Calling .Close
	I0210 13:25:17.453057  691489 main.go:141] libmachine: (newest-cni-078760) DBG | Closing plugin on server side
	I0210 13:25:17.453089  691489 main.go:141] libmachine: Successfully made call to close driver server
	I0210 13:25:17.453098  691489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 13:25:17.455198  691489 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-078760 addons enable metrics-server
	
	I0210 13:25:17.456604  691489 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0210 13:25:17.458205  691489 addons.go:514] duration metric: took 2.782999976s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0210 13:25:17.458254  691489 start.go:246] waiting for cluster config update ...
	I0210 13:25:17.458273  691489 start.go:255] writing updated cluster config ...
	I0210 13:25:17.458614  691489 ssh_runner.go:195] Run: rm -f paused
	I0210 13:25:17.524434  691489 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0210 13:25:17.526201  691489 out.go:177] * Done! kubectl is now configured to use "newest-cni-078760" cluster and "default" namespace by default
	I0210 13:25:13.769744  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:15.770291  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:18.270374  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:20.270770  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:22.769900  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:24.770480  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:27.269398  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:29.270791  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:31.769785  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:34.269730  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:36.270751  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:38.770282  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:41.270569  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:43.769870  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:46.269860  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:48.269910  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:50.770287  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:53.270301  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:55.769898  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:57.770053  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:00.270852  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:02.769689  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:04.770190  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:06.770226  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:09.271157  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:11.770318  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:14.269317  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:16.270215  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:18.770402  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:21.269667  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:23.275443  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:25.770573  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:28.270716  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:30.271759  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:32.770603  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:35.269945  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:37.769930  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:39.783553  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:42.271101  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:44.774027  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:47.270211  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:49.771412  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:52.271199  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
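(Editor's note: the repeated pod_ready lines above are a second test process (pid 689817) polling the Ready condition of pod "metrics-server-f79f97bbb-sg6xj" until it becomes True, which never happens in this run. A hypothetical client-go sketch of that check follows; the kubeconfig path and the ~2s polling interval are assumptions, the namespace and pod name come from the log.)

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	// Hypothetical kubeconfig path; the test suite uses its own per-profile kubeconfig.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	for {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
    			"metrics-server-f79f97bbb-sg6xj", metav1.GetOptions{})
    		if err == nil && isPodReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		fmt.Println(`pod has status "Ready":"False"`)
    		time.Sleep(2 * time.Second) // the log shows a roughly 2-2.5s cadence
    	}
    }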
	I0210 13:26:52.767674  688914 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0210 13:26:52.767807  688914 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0210 13:26:52.769626  688914 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0210 13:26:52.769700  688914 kubeadm.go:310] [preflight] Running pre-flight checks
	I0210 13:26:52.769810  688914 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0210 13:26:52.769934  688914 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0210 13:26:52.770031  688914 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0210 13:26:52.770114  688914 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0210 13:26:52.771972  688914 out.go:235]   - Generating certificates and keys ...
	I0210 13:26:52.772065  688914 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0210 13:26:52.772157  688914 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0210 13:26:52.772272  688914 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0210 13:26:52.772338  688914 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0210 13:26:52.772402  688914 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0210 13:26:52.772464  688914 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0210 13:26:52.772523  688914 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0210 13:26:52.772581  688914 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0210 13:26:52.772660  688914 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0210 13:26:52.772734  688914 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0210 13:26:52.772770  688914 kubeadm.go:310] [certs] Using the existing "sa" key
	I0210 13:26:52.772822  688914 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0210 13:26:52.772867  688914 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0210 13:26:52.772917  688914 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0210 13:26:52.772974  688914 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0210 13:26:52.773022  688914 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0210 13:26:52.773151  688914 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0210 13:26:52.773258  688914 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0210 13:26:52.773305  688914 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0210 13:26:52.773386  688914 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0210 13:26:52.774698  688914 out.go:235]   - Booting up control plane ...
	I0210 13:26:52.774783  688914 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0210 13:26:52.774853  688914 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0210 13:26:52.774915  688914 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0210 13:26:52.775002  688914 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0210 13:26:52.775179  688914 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0210 13:26:52.775244  688914 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0210 13:26:52.775340  688914 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:26:52.775545  688914 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:26:52.775613  688914 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:26:52.775783  688914 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:26:52.775841  688914 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:26:52.776005  688914 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:26:52.776090  688914 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:26:52.776307  688914 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:26:52.776424  688914 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:26:52.776602  688914 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:26:52.776616  688914 kubeadm.go:310] 
	I0210 13:26:52.776653  688914 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0210 13:26:52.776690  688914 kubeadm.go:310] 		timed out waiting for the condition
	I0210 13:26:52.776699  688914 kubeadm.go:310] 
	I0210 13:26:52.776733  688914 kubeadm.go:310] 	This error is likely caused by:
	I0210 13:26:52.776763  688914 kubeadm.go:310] 		- The kubelet is not running
	I0210 13:26:52.776850  688914 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0210 13:26:52.776856  688914 kubeadm.go:310] 
	I0210 13:26:52.776949  688914 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0210 13:26:52.776979  688914 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0210 13:26:52.777011  688914 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0210 13:26:52.777017  688914 kubeadm.go:310] 
	I0210 13:26:52.777134  688914 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0210 13:26:52.777239  688914 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0210 13:26:52.777252  688914 kubeadm.go:310] 
	I0210 13:26:52.777401  688914 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0210 13:26:52.777543  688914 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0210 13:26:52.777651  688914 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0210 13:26:52.777721  688914 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0210 13:26:52.777789  688914 kubeadm.go:310] 
	W0210 13:26:52.777852  688914 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0210 13:26:52.777903  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0210 13:26:54.770289  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:56.770506  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:58.074596  688914 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.296665584s)
	I0210 13:26:58.074683  688914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 13:26:58.091152  688914 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 13:26:58.102648  688914 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 13:26:58.102673  688914 kubeadm.go:157] found existing configuration files:
	
	I0210 13:26:58.102740  688914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0210 13:26:58.113654  688914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 13:26:58.113729  688914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 13:26:58.124863  688914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0210 13:26:58.135257  688914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 13:26:58.135321  688914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 13:26:58.145634  688914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0210 13:26:58.154591  688914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 13:26:58.154654  688914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 13:26:58.163835  688914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0210 13:26:58.172611  688914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 13:26:58.172679  688914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0210 13:26:58.182392  688914 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0210 13:26:58.251261  688914 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0210 13:26:58.251358  688914 kubeadm.go:310] [preflight] Running pre-flight checks
	I0210 13:26:58.383309  688914 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0210 13:26:58.383435  688914 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0210 13:26:58.383542  688914 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0210 13:26:58.550776  688914 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0210 13:26:58.552680  688914 out.go:235]   - Generating certificates and keys ...
	I0210 13:26:58.552793  688914 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0210 13:26:58.552881  688914 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0210 13:26:58.553007  688914 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0210 13:26:58.553091  688914 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0210 13:26:58.553226  688914 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0210 13:26:58.553329  688914 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0210 13:26:58.553420  688914 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0210 13:26:58.553525  688914 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0210 13:26:58.553642  688914 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0210 13:26:58.553774  688914 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0210 13:26:58.553837  688914 kubeadm.go:310] [certs] Using the existing "sa" key
	I0210 13:26:58.553918  688914 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0210 13:26:58.654826  688914 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0210 13:26:58.871525  688914 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0210 13:26:59.121959  688914 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0210 13:26:59.254004  688914 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0210 13:26:59.268822  688914 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0210 13:26:59.269202  688914 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0210 13:26:59.269279  688914 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0210 13:26:59.410011  688914 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0210 13:26:59.412184  688914 out.go:235]   - Booting up control plane ...
	I0210 13:26:59.412320  688914 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0210 13:26:59.425128  688914 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0210 13:26:59.426554  688914 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0210 13:26:59.427605  688914 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0210 13:26:59.433353  688914 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0210 13:26:59.270125  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:01.270335  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:03.770196  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:06.270103  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:08.770078  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:11.269430  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:13.770250  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:16.269952  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:18.270261  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:20.270697  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:22.768944  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:24.770265  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:27.269151  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:29.270121  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:31.271007  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:33.769366  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:35.769901  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:39.435230  688914 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0210 13:27:39.435410  688914 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:27:39.435648  688914 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:27:38.270194  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:40.770209  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:44.436555  688914 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:27:44.436828  688914 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:27:42.770480  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:45.270561  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:47.770652  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:49.770343  689817 pod_ready.go:82] duration metric: took 4m0.005913971s for pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace to be "Ready" ...
	E0210 13:27:49.770375  689817 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0210 13:27:49.770383  689817 pod_ready.go:39] duration metric: took 4m9.41326084s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0210 13:27:49.770402  689817 api_server.go:52] waiting for apiserver process to appear ...
	I0210 13:27:49.770454  689817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:27:49.770518  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:27:49.817157  689817 cri.go:89] found id: "d627e9b7c3fa6a5649d806da19c32d2f57dec9716719b51822e8311467253f6a"
	I0210 13:27:49.817183  689817 cri.go:89] found id: ""
	I0210 13:27:49.817192  689817 logs.go:282] 1 containers: [d627e9b7c3fa6a5649d806da19c32d2f57dec9716719b51822e8311467253f6a]
	I0210 13:27:49.817252  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:49.821670  689817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:27:49.821737  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:27:49.857058  689817 cri.go:89] found id: "92f9c8b03501892b9afb12729375bb29bc3a99633166557dc27a11936957bbc9"
	I0210 13:27:49.857087  689817 cri.go:89] found id: ""
	I0210 13:27:49.857096  689817 logs.go:282] 1 containers: [92f9c8b03501892b9afb12729375bb29bc3a99633166557dc27a11936957bbc9]
	I0210 13:27:49.857182  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:49.861432  689817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:27:49.861505  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:27:49.897872  689817 cri.go:89] found id: "c69258e06a494a7d31ccc77bb451b1e2def37ce0dbe569f1723c6375331b9844"
	I0210 13:27:49.897903  689817 cri.go:89] found id: ""
	I0210 13:27:49.897914  689817 logs.go:282] 1 containers: [c69258e06a494a7d31ccc77bb451b1e2def37ce0dbe569f1723c6375331b9844]
	I0210 13:27:49.897982  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:49.902266  689817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:27:49.902339  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:27:49.944231  689817 cri.go:89] found id: "e43d7d065b40a13592892ab82740b9367609ef546af51f15305a3ed11cab3a31"
	I0210 13:27:49.944261  689817 cri.go:89] found id: ""
	I0210 13:27:49.944272  689817 logs.go:282] 1 containers: [e43d7d065b40a13592892ab82740b9367609ef546af51f15305a3ed11cab3a31]
	I0210 13:27:49.944336  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:49.948503  689817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:27:49.948579  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:27:49.990016  689817 cri.go:89] found id: "e2562695b2f325b7a32533b0bf97a658ef2ae704f29306876f7f4ed5ae008225"
	I0210 13:27:49.990040  689817 cri.go:89] found id: ""
	I0210 13:27:49.990048  689817 logs.go:282] 1 containers: [e2562695b2f325b7a32533b0bf97a658ef2ae704f29306876f7f4ed5ae008225]
	I0210 13:27:49.990106  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:49.994001  689817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:27:49.994060  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:27:50.027512  689817 cri.go:89] found id: "ac2812f3cb686632428ed2027438bdc46bec1328925155fe9268bc7049fcdfaa"
	I0210 13:27:50.027538  689817 cri.go:89] found id: ""
	I0210 13:27:50.027549  689817 logs.go:282] 1 containers: [ac2812f3cb686632428ed2027438bdc46bec1328925155fe9268bc7049fcdfaa]
	I0210 13:27:50.027614  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:50.031763  689817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:27:50.031823  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:27:50.066416  689817 cri.go:89] found id: ""
	I0210 13:27:50.066448  689817 logs.go:282] 0 containers: []
	W0210 13:27:50.066459  689817 logs.go:284] No container was found matching "kindnet"
	I0210 13:27:50.066467  689817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:27:50.066535  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:27:50.101054  689817 cri.go:89] found id: "bfd354ff383241bb1649bf5b0926b21820572cd5cdd6b975c30be9b3b1a3e9b0"
	I0210 13:27:50.101076  689817 cri.go:89] found id: ""
	I0210 13:27:50.101084  689817 logs.go:282] 1 containers: [bfd354ff383241bb1649bf5b0926b21820572cd5cdd6b975c30be9b3b1a3e9b0]
	I0210 13:27:50.101151  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:50.104987  689817 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0210 13:27:50.105056  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0210 13:27:50.142580  689817 cri.go:89] found id: "e7104d3c43d5daf3e430fe8c4dbd2854e5da62e7f1002ea1cbcf90e2662e5863"
	I0210 13:27:50.142608  689817 cri.go:89] found id: "9c223eae833ec780bdacab65f7a885047e04740e48b7cd90dbdce507d087969a"
	I0210 13:27:50.142614  689817 cri.go:89] found id: ""
	I0210 13:27:50.142624  689817 logs.go:282] 2 containers: [e7104d3c43d5daf3e430fe8c4dbd2854e5da62e7f1002ea1cbcf90e2662e5863 9c223eae833ec780bdacab65f7a885047e04740e48b7cd90dbdce507d087969a]
	I0210 13:27:50.142692  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:50.146540  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:50.150056  689817 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:27:50.150079  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0210 13:27:50.311229  689817 logs.go:123] Gathering logs for kube-apiserver [d627e9b7c3fa6a5649d806da19c32d2f57dec9716719b51822e8311467253f6a] ...
	I0210 13:27:50.311279  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d627e9b7c3fa6a5649d806da19c32d2f57dec9716719b51822e8311467253f6a"
	I0210 13:27:50.366011  689817 logs.go:123] Gathering logs for etcd [92f9c8b03501892b9afb12729375bb29bc3a99633166557dc27a11936957bbc9] ...
	I0210 13:27:50.366046  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92f9c8b03501892b9afb12729375bb29bc3a99633166557dc27a11936957bbc9"
	I0210 13:27:50.412490  689817 logs.go:123] Gathering logs for kube-controller-manager [ac2812f3cb686632428ed2027438bdc46bec1328925155fe9268bc7049fcdfaa] ...
	I0210 13:27:50.412523  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac2812f3cb686632428ed2027438bdc46bec1328925155fe9268bc7049fcdfaa"
	I0210 13:27:50.476890  689817 logs.go:123] Gathering logs for kubelet ...
	I0210 13:27:50.476940  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:27:50.571913  689817 logs.go:123] Gathering logs for coredns [c69258e06a494a7d31ccc77bb451b1e2def37ce0dbe569f1723c6375331b9844] ...
	I0210 13:27:50.571960  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c69258e06a494a7d31ccc77bb451b1e2def37ce0dbe569f1723c6375331b9844"
	I0210 13:27:50.606241  689817 logs.go:123] Gathering logs for kube-proxy [e2562695b2f325b7a32533b0bf97a658ef2ae704f29306876f7f4ed5ae008225] ...
	I0210 13:27:50.606284  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2562695b2f325b7a32533b0bf97a658ef2ae704f29306876f7f4ed5ae008225"
	I0210 13:27:50.640859  689817 logs.go:123] Gathering logs for storage-provisioner [e7104d3c43d5daf3e430fe8c4dbd2854e5da62e7f1002ea1cbcf90e2662e5863] ...
	I0210 13:27:50.640895  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7104d3c43d5daf3e430fe8c4dbd2854e5da62e7f1002ea1cbcf90e2662e5863"
	I0210 13:27:50.675943  689817 logs.go:123] Gathering logs for storage-provisioner [9c223eae833ec780bdacab65f7a885047e04740e48b7cd90dbdce507d087969a] ...
	I0210 13:27:50.675979  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c223eae833ec780bdacab65f7a885047e04740e48b7cd90dbdce507d087969a"
	I0210 13:27:50.708397  689817 logs.go:123] Gathering logs for container status ...
	I0210 13:27:50.708447  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:27:50.759969  689817 logs.go:123] Gathering logs for dmesg ...
	I0210 13:27:50.760002  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:27:50.773795  689817 logs.go:123] Gathering logs for kube-scheduler [e43d7d065b40a13592892ab82740b9367609ef546af51f15305a3ed11cab3a31] ...
	I0210 13:27:50.773827  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e43d7d065b40a13592892ab82740b9367609ef546af51f15305a3ed11cab3a31"
	I0210 13:27:50.808393  689817 logs.go:123] Gathering logs for kubernetes-dashboard [bfd354ff383241bb1649bf5b0926b21820572cd5cdd6b975c30be9b3b1a3e9b0] ...
	I0210 13:27:50.808426  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bfd354ff383241bb1649bf5b0926b21820572cd5cdd6b975c30be9b3b1a3e9b0"
	I0210 13:27:50.841955  689817 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:27:50.841988  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:27:54.437160  688914 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:27:54.437400  688914 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:27:53.852846  689817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:27:53.869585  689817 api_server.go:72] duration metric: took 4m20.830334356s to wait for apiserver process to appear ...
	I0210 13:27:53.869618  689817 api_server.go:88] waiting for apiserver healthz status ...
	I0210 13:27:53.869665  689817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:27:53.869721  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:27:53.907655  689817 cri.go:89] found id: "d627e9b7c3fa6a5649d806da19c32d2f57dec9716719b51822e8311467253f6a"
	I0210 13:27:53.907686  689817 cri.go:89] found id: ""
	I0210 13:27:53.907695  689817 logs.go:282] 1 containers: [d627e9b7c3fa6a5649d806da19c32d2f57dec9716719b51822e8311467253f6a]
	I0210 13:27:53.907758  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:53.911810  689817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:27:53.911893  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:27:53.952378  689817 cri.go:89] found id: "92f9c8b03501892b9afb12729375bb29bc3a99633166557dc27a11936957bbc9"
	I0210 13:27:53.952414  689817 cri.go:89] found id: ""
	I0210 13:27:53.952424  689817 logs.go:282] 1 containers: [92f9c8b03501892b9afb12729375bb29bc3a99633166557dc27a11936957bbc9]
	I0210 13:27:53.952481  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:53.956365  689817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:27:53.956441  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:27:53.991382  689817 cri.go:89] found id: "c69258e06a494a7d31ccc77bb451b1e2def37ce0dbe569f1723c6375331b9844"
	I0210 13:27:53.991419  689817 cri.go:89] found id: ""
	I0210 13:27:53.991428  689817 logs.go:282] 1 containers: [c69258e06a494a7d31ccc77bb451b1e2def37ce0dbe569f1723c6375331b9844]
	I0210 13:27:53.991485  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:53.995300  689817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:27:53.995386  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:27:54.029032  689817 cri.go:89] found id: "e43d7d065b40a13592892ab82740b9367609ef546af51f15305a3ed11cab3a31"
	I0210 13:27:54.029061  689817 cri.go:89] found id: ""
	I0210 13:27:54.029071  689817 logs.go:282] 1 containers: [e43d7d065b40a13592892ab82740b9367609ef546af51f15305a3ed11cab3a31]
	I0210 13:27:54.029148  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:54.032926  689817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:27:54.032978  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:27:54.070279  689817 cri.go:89] found id: "e2562695b2f325b7a32533b0bf97a658ef2ae704f29306876f7f4ed5ae008225"
	I0210 13:27:54.070310  689817 cri.go:89] found id: ""
	I0210 13:27:54.070321  689817 logs.go:282] 1 containers: [e2562695b2f325b7a32533b0bf97a658ef2ae704f29306876f7f4ed5ae008225]
	I0210 13:27:54.070380  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:54.074168  689817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:27:54.074254  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:27:54.108632  689817 cri.go:89] found id: "ac2812f3cb686632428ed2027438bdc46bec1328925155fe9268bc7049fcdfaa"
	I0210 13:27:54.108665  689817 cri.go:89] found id: ""
	I0210 13:27:54.108676  689817 logs.go:282] 1 containers: [ac2812f3cb686632428ed2027438bdc46bec1328925155fe9268bc7049fcdfaa]
	I0210 13:27:54.108752  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:54.112693  689817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:27:54.112777  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:27:54.147138  689817 cri.go:89] found id: ""
	I0210 13:27:54.147170  689817 logs.go:282] 0 containers: []
	W0210 13:27:54.147178  689817 logs.go:284] No container was found matching "kindnet"
	I0210 13:27:54.147185  689817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:27:54.147247  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:27:54.183531  689817 cri.go:89] found id: "bfd354ff383241bb1649bf5b0926b21820572cd5cdd6b975c30be9b3b1a3e9b0"
	I0210 13:27:54.183555  689817 cri.go:89] found id: ""
	I0210 13:27:54.183563  689817 logs.go:282] 1 containers: [bfd354ff383241bb1649bf5b0926b21820572cd5cdd6b975c30be9b3b1a3e9b0]
	I0210 13:27:54.183620  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:54.187900  689817 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0210 13:27:54.187970  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0210 13:27:54.224779  689817 cri.go:89] found id: "e7104d3c43d5daf3e430fe8c4dbd2854e5da62e7f1002ea1cbcf90e2662e5863"
	I0210 13:27:54.224803  689817 cri.go:89] found id: "9c223eae833ec780bdacab65f7a885047e04740e48b7cd90dbdce507d087969a"
	I0210 13:27:54.224807  689817 cri.go:89] found id: ""
	I0210 13:27:54.224815  689817 logs.go:282] 2 containers: [e7104d3c43d5daf3e430fe8c4dbd2854e5da62e7f1002ea1cbcf90e2662e5863 9c223eae833ec780bdacab65f7a885047e04740e48b7cd90dbdce507d087969a]
	I0210 13:27:54.224870  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:54.229251  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:54.232955  689817 logs.go:123] Gathering logs for coredns [c69258e06a494a7d31ccc77bb451b1e2def37ce0dbe569f1723c6375331b9844] ...
	I0210 13:27:54.232973  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c69258e06a494a7d31ccc77bb451b1e2def37ce0dbe569f1723c6375331b9844"
	I0210 13:27:54.266570  689817 logs.go:123] Gathering logs for kube-controller-manager [ac2812f3cb686632428ed2027438bdc46bec1328925155fe9268bc7049fcdfaa] ...
	I0210 13:27:54.266604  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac2812f3cb686632428ed2027438bdc46bec1328925155fe9268bc7049fcdfaa"
	I0210 13:27:54.343214  689817 logs.go:123] Gathering logs for storage-provisioner [e7104d3c43d5daf3e430fe8c4dbd2854e5da62e7f1002ea1cbcf90e2662e5863] ...
	I0210 13:27:54.343252  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7104d3c43d5daf3e430fe8c4dbd2854e5da62e7f1002ea1cbcf90e2662e5863"
	I0210 13:27:54.376776  689817 logs.go:123] Gathering logs for kubernetes-dashboard [bfd354ff383241bb1649bf5b0926b21820572cd5cdd6b975c30be9b3b1a3e9b0] ...
	I0210 13:27:54.376808  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bfd354ff383241bb1649bf5b0926b21820572cd5cdd6b975c30be9b3b1a3e9b0"
	I0210 13:27:54.410609  689817 logs.go:123] Gathering logs for storage-provisioner [9c223eae833ec780bdacab65f7a885047e04740e48b7cd90dbdce507d087969a] ...
	I0210 13:27:54.410639  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c223eae833ec780bdacab65f7a885047e04740e48b7cd90dbdce507d087969a"
	I0210 13:27:54.443452  689817 logs.go:123] Gathering logs for kubelet ...
	I0210 13:27:54.443478  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:27:54.527929  689817 logs.go:123] Gathering logs for dmesg ...
	I0210 13:27:54.527979  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:27:54.542227  689817 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:27:54.542268  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0210 13:27:54.641377  689817 logs.go:123] Gathering logs for etcd [92f9c8b03501892b9afb12729375bb29bc3a99633166557dc27a11936957bbc9] ...
	I0210 13:27:54.641418  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92f9c8b03501892b9afb12729375bb29bc3a99633166557dc27a11936957bbc9"
	I0210 13:27:54.688223  689817 logs.go:123] Gathering logs for kube-proxy [e2562695b2f325b7a32533b0bf97a658ef2ae704f29306876f7f4ed5ae008225] ...
	I0210 13:27:54.688271  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2562695b2f325b7a32533b0bf97a658ef2ae704f29306876f7f4ed5ae008225"
	I0210 13:27:54.725502  689817 logs.go:123] Gathering logs for kube-apiserver [d627e9b7c3fa6a5649d806da19c32d2f57dec9716719b51822e8311467253f6a] ...
	I0210 13:27:54.725539  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d627e9b7c3fa6a5649d806da19c32d2f57dec9716719b51822e8311467253f6a"
	I0210 13:27:54.765130  689817 logs.go:123] Gathering logs for kube-scheduler [e43d7d065b40a13592892ab82740b9367609ef546af51f15305a3ed11cab3a31] ...
	I0210 13:27:54.765167  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e43d7d065b40a13592892ab82740b9367609ef546af51f15305a3ed11cab3a31"
	I0210 13:27:54.800179  689817 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:27:54.800207  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:27:55.252259  689817 logs.go:123] Gathering logs for container status ...
	I0210 13:27:55.252300  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:27:57.789687  689817 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8444/healthz ...
	I0210 13:27:57.794618  689817 api_server.go:279] https://192.168.50.61:8444/healthz returned 200:
	ok
	I0210 13:27:57.795699  689817 api_server.go:141] control plane version: v1.32.1
	I0210 13:27:57.795724  689817 api_server.go:131] duration metric: took 3.926098165s to wait for apiserver health ...
	I0210 13:27:57.795735  689817 system_pods.go:43] waiting for kube-system pods to appear ...
	I0210 13:27:57.795772  689817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:27:57.795820  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:27:57.829148  689817 cri.go:89] found id: "d627e9b7c3fa6a5649d806da19c32d2f57dec9716719b51822e8311467253f6a"
	I0210 13:27:57.829179  689817 cri.go:89] found id: ""
	I0210 13:27:57.829190  689817 logs.go:282] 1 containers: [d627e9b7c3fa6a5649d806da19c32d2f57dec9716719b51822e8311467253f6a]
	I0210 13:27:57.829265  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:57.833209  689817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:27:57.833272  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:27:57.865761  689817 cri.go:89] found id: "92f9c8b03501892b9afb12729375bb29bc3a99633166557dc27a11936957bbc9"
	I0210 13:27:57.865789  689817 cri.go:89] found id: ""
	I0210 13:27:57.865799  689817 logs.go:282] 1 containers: [92f9c8b03501892b9afb12729375bb29bc3a99633166557dc27a11936957bbc9]
	I0210 13:27:57.865866  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:57.869409  689817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:27:57.869480  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:27:57.905847  689817 cri.go:89] found id: "c69258e06a494a7d31ccc77bb451b1e2def37ce0dbe569f1723c6375331b9844"
	I0210 13:27:57.905875  689817 cri.go:89] found id: ""
	I0210 13:27:57.905886  689817 logs.go:282] 1 containers: [c69258e06a494a7d31ccc77bb451b1e2def37ce0dbe569f1723c6375331b9844]
	I0210 13:27:57.905956  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:57.911821  689817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:27:57.911896  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:27:57.950779  689817 cri.go:89] found id: "e43d7d065b40a13592892ab82740b9367609ef546af51f15305a3ed11cab3a31"
	I0210 13:27:57.950803  689817 cri.go:89] found id: ""
	I0210 13:27:57.950810  689817 logs.go:282] 1 containers: [e43d7d065b40a13592892ab82740b9367609ef546af51f15305a3ed11cab3a31]
	I0210 13:27:57.950880  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:57.954573  689817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:27:57.954651  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:27:57.991678  689817 cri.go:89] found id: "e2562695b2f325b7a32533b0bf97a658ef2ae704f29306876f7f4ed5ae008225"
	I0210 13:27:57.991705  689817 cri.go:89] found id: ""
	I0210 13:27:57.991717  689817 logs.go:282] 1 containers: [e2562695b2f325b7a32533b0bf97a658ef2ae704f29306876f7f4ed5ae008225]
	I0210 13:27:57.991772  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:57.995971  689817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:27:57.996063  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:27:58.029073  689817 cri.go:89] found id: "ac2812f3cb686632428ed2027438bdc46bec1328925155fe9268bc7049fcdfaa"
	I0210 13:27:58.029098  689817 cri.go:89] found id: ""
	I0210 13:27:58.029144  689817 logs.go:282] 1 containers: [ac2812f3cb686632428ed2027438bdc46bec1328925155fe9268bc7049fcdfaa]
	I0210 13:27:58.029212  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:58.034012  689817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:27:58.034073  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:27:58.071316  689817 cri.go:89] found id: ""
	I0210 13:27:58.071346  689817 logs.go:282] 0 containers: []
	W0210 13:27:58.071358  689817 logs.go:284] No container was found matching "kindnet"
	I0210 13:27:58.071367  689817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:27:58.071438  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:27:58.105280  689817 cri.go:89] found id: "bfd354ff383241bb1649bf5b0926b21820572cd5cdd6b975c30be9b3b1a3e9b0"
	I0210 13:27:58.105308  689817 cri.go:89] found id: ""
	I0210 13:27:58.105319  689817 logs.go:282] 1 containers: [bfd354ff383241bb1649bf5b0926b21820572cd5cdd6b975c30be9b3b1a3e9b0]
	I0210 13:27:58.105390  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:58.109074  689817 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0210 13:27:58.109169  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0210 13:27:58.141391  689817 cri.go:89] found id: "e7104d3c43d5daf3e430fe8c4dbd2854e5da62e7f1002ea1cbcf90e2662e5863"
	I0210 13:27:58.141415  689817 cri.go:89] found id: "9c223eae833ec780bdacab65f7a885047e04740e48b7cd90dbdce507d087969a"
	I0210 13:27:58.141422  689817 cri.go:89] found id: ""
	I0210 13:27:58.141432  689817 logs.go:282] 2 containers: [e7104d3c43d5daf3e430fe8c4dbd2854e5da62e7f1002ea1cbcf90e2662e5863 9c223eae833ec780bdacab65f7a885047e04740e48b7cd90dbdce507d087969a]
	I0210 13:27:58.141490  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:58.144977  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:58.148249  689817 logs.go:123] Gathering logs for kube-controller-manager [ac2812f3cb686632428ed2027438bdc46bec1328925155fe9268bc7049fcdfaa] ...
	I0210 13:27:58.148272  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac2812f3cb686632428ed2027438bdc46bec1328925155fe9268bc7049fcdfaa"
	I0210 13:27:58.201328  689817 logs.go:123] Gathering logs for kubelet ...
	I0210 13:27:58.201360  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:27:58.296953  689817 logs.go:123] Gathering logs for dmesg ...
	I0210 13:27:58.297010  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:27:58.311276  689817 logs.go:123] Gathering logs for etcd [92f9c8b03501892b9afb12729375bb29bc3a99633166557dc27a11936957bbc9] ...
	I0210 13:27:58.311312  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92f9c8b03501892b9afb12729375bb29bc3a99633166557dc27a11936957bbc9"
	I0210 13:27:58.361415  689817 logs.go:123] Gathering logs for coredns [c69258e06a494a7d31ccc77bb451b1e2def37ce0dbe569f1723c6375331b9844] ...
	I0210 13:27:58.361452  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c69258e06a494a7d31ccc77bb451b1e2def37ce0dbe569f1723c6375331b9844"
	I0210 13:27:58.396072  689817 logs.go:123] Gathering logs for kube-apiserver [d627e9b7c3fa6a5649d806da19c32d2f57dec9716719b51822e8311467253f6a] ...
	I0210 13:27:58.396109  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d627e9b7c3fa6a5649d806da19c32d2f57dec9716719b51822e8311467253f6a"
	I0210 13:27:58.448027  689817 logs.go:123] Gathering logs for kube-scheduler [e43d7d065b40a13592892ab82740b9367609ef546af51f15305a3ed11cab3a31] ...
	I0210 13:27:58.448064  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e43d7d065b40a13592892ab82740b9367609ef546af51f15305a3ed11cab3a31"
	I0210 13:27:58.481535  689817 logs.go:123] Gathering logs for kube-proxy [e2562695b2f325b7a32533b0bf97a658ef2ae704f29306876f7f4ed5ae008225] ...
	I0210 13:27:58.481573  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2562695b2f325b7a32533b0bf97a658ef2ae704f29306876f7f4ed5ae008225"
	I0210 13:27:58.514411  689817 logs.go:123] Gathering logs for kubernetes-dashboard [bfd354ff383241bb1649bf5b0926b21820572cd5cdd6b975c30be9b3b1a3e9b0] ...
	I0210 13:27:58.514445  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bfd354ff383241bb1649bf5b0926b21820572cd5cdd6b975c30be9b3b1a3e9b0"
	I0210 13:27:58.549570  689817 logs.go:123] Gathering logs for storage-provisioner [9c223eae833ec780bdacab65f7a885047e04740e48b7cd90dbdce507d087969a] ...
	I0210 13:27:58.549603  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c223eae833ec780bdacab65f7a885047e04740e48b7cd90dbdce507d087969a"
	I0210 13:27:58.592297  689817 logs.go:123] Gathering logs for container status ...
	I0210 13:27:58.592330  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:27:58.631626  689817 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:27:58.631667  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0210 13:27:58.727480  689817 logs.go:123] Gathering logs for storage-provisioner [e7104d3c43d5daf3e430fe8c4dbd2854e5da62e7f1002ea1cbcf90e2662e5863] ...
	I0210 13:27:58.727519  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7104d3c43d5daf3e430fe8c4dbd2854e5da62e7f1002ea1cbcf90e2662e5863"
	I0210 13:27:58.760031  689817 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:27:58.760069  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:28:01.664367  689817 system_pods.go:59] 8 kube-system pods found
	I0210 13:28:01.664422  689817 system_pods.go:61] "coredns-668d6bf9bc-fj2zq" [583359d8-8ada-4747-8682-6176db3f798a] Running
	I0210 13:28:01.664431  689817 system_pods.go:61] "etcd-default-k8s-diff-port-957542" [15bd93be-c696-42f6-9406-abe5d824a9d0] Running
	I0210 13:28:01.664436  689817 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-957542" [475365bf-2504-46d7-a068-5f5e3a9c773e] Running
	I0210 13:28:01.664442  689817 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-957542" [21fcb133-d0ed-4608-8d25-3719f15d0aaa] Running
	I0210 13:28:01.664446  689817 system_pods.go:61] "kube-proxy-8th94" [1e1a48fd-55a5-48e4-84dc-638f9d650e12] Running
	I0210 13:28:01.664451  689817 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-957542" [1bbe3544-9217-4b50-9903-8b0edf49f996] Running
	I0210 13:28:01.664459  689817 system_pods.go:61] "metrics-server-f79f97bbb-sg6xj" [4fd14781-7917-44e7-8358-2ae86a7bac81] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0210 13:28:01.664465  689817 system_pods.go:61] "storage-provisioner" [30e8603f-89cf-4919-9bf4-bcece8c32934] Running
	I0210 13:28:01.664478  689817 system_pods.go:74] duration metric: took 3.868731638s to wait for pod list to return data ...
	I0210 13:28:01.664492  689817 default_sa.go:34] waiting for default service account to be created ...
	I0210 13:28:01.666845  689817 default_sa.go:45] found service account: "default"
	I0210 13:28:01.666865  689817 default_sa.go:55] duration metric: took 2.365764ms for default service account to be created ...
	I0210 13:28:01.666874  689817 system_pods.go:116] waiting for k8s-apps to be running ...
	I0210 13:28:01.669411  689817 system_pods.go:86] 8 kube-system pods found
	I0210 13:28:01.669440  689817 system_pods.go:89] "coredns-668d6bf9bc-fj2zq" [583359d8-8ada-4747-8682-6176db3f798a] Running
	I0210 13:28:01.669446  689817 system_pods.go:89] "etcd-default-k8s-diff-port-957542" [15bd93be-c696-42f6-9406-abe5d824a9d0] Running
	I0210 13:28:01.669451  689817 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-957542" [475365bf-2504-46d7-a068-5f5e3a9c773e] Running
	I0210 13:28:01.669455  689817 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-957542" [21fcb133-d0ed-4608-8d25-3719f15d0aaa] Running
	I0210 13:28:01.669459  689817 system_pods.go:89] "kube-proxy-8th94" [1e1a48fd-55a5-48e4-84dc-638f9d650e12] Running
	I0210 13:28:01.669463  689817 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-957542" [1bbe3544-9217-4b50-9903-8b0edf49f996] Running
	I0210 13:28:01.669469  689817 system_pods.go:89] "metrics-server-f79f97bbb-sg6xj" [4fd14781-7917-44e7-8358-2ae86a7bac81] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0210 13:28:01.669474  689817 system_pods.go:89] "storage-provisioner" [30e8603f-89cf-4919-9bf4-bcece8c32934] Running
	I0210 13:28:01.669482  689817 system_pods.go:126] duration metric: took 2.601853ms to wait for k8s-apps to be running ...
	I0210 13:28:01.669489  689817 system_svc.go:44] waiting for kubelet service to be running ....
	I0210 13:28:01.669552  689817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 13:28:01.684641  689817 system_svc.go:56] duration metric: took 15.145438ms WaitForService to wait for kubelet
	I0210 13:28:01.684677  689817 kubeadm.go:582] duration metric: took 4m28.645432042s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0210 13:28:01.684724  689817 node_conditions.go:102] verifying NodePressure condition ...
	I0210 13:28:01.687051  689817 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 13:28:01.687081  689817 node_conditions.go:123] node cpu capacity is 2
	I0210 13:28:01.687115  689817 node_conditions.go:105] duration metric: took 2.383739ms to run NodePressure ...
	I0210 13:28:01.687149  689817 start.go:241] waiting for startup goroutines ...
	I0210 13:28:01.687161  689817 start.go:246] waiting for cluster config update ...
	I0210 13:28:01.687172  689817 start.go:255] writing updated cluster config ...
	I0210 13:28:01.687476  689817 ssh_runner.go:195] Run: rm -f paused
	I0210 13:28:01.739316  689817 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0210 13:28:01.741286  689817 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-957542" cluster and "default" namespace by default
	I0210 13:28:14.437678  688914 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:28:14.437931  688914 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:28:54.436979  688914 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:28:54.437271  688914 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:28:54.437281  688914 kubeadm.go:310] 
	I0210 13:28:54.437319  688914 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0210 13:28:54.437355  688914 kubeadm.go:310] 		timed out waiting for the condition
	I0210 13:28:54.437361  688914 kubeadm.go:310] 
	I0210 13:28:54.437390  688914 kubeadm.go:310] 	This error is likely caused by:
	I0210 13:28:54.437468  688914 kubeadm.go:310] 		- The kubelet is not running
	I0210 13:28:54.437614  688914 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0210 13:28:54.437628  688914 kubeadm.go:310] 
	I0210 13:28:54.437762  688914 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0210 13:28:54.437806  688914 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0210 13:28:54.437850  688914 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0210 13:28:54.437863  688914 kubeadm.go:310] 
	I0210 13:28:54.437986  688914 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0210 13:28:54.438064  688914 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0210 13:28:54.438084  688914 kubeadm.go:310] 
	I0210 13:28:54.438245  688914 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0210 13:28:54.438388  688914 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0210 13:28:54.438510  688914 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0210 13:28:54.438608  688914 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0210 13:28:54.438622  688914 kubeadm.go:310] 
	I0210 13:28:54.439017  688914 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0210 13:28:54.439094  688914 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0210 13:28:54.439183  688914 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0210 13:28:54.439220  688914 kubeadm.go:394] duration metric: took 8m1.096783715s to StartCluster
	I0210 13:28:54.439356  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:28:54.439446  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:28:54.481711  688914 cri.go:89] found id: ""
	I0210 13:28:54.481745  688914 logs.go:282] 0 containers: []
	W0210 13:28:54.481753  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:28:54.481759  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:28:54.481826  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:28:54.515485  688914 cri.go:89] found id: ""
	I0210 13:28:54.515513  688914 logs.go:282] 0 containers: []
	W0210 13:28:54.515521  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:28:54.515528  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:28:54.515585  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:28:54.565719  688914 cri.go:89] found id: ""
	I0210 13:28:54.565767  688914 logs.go:282] 0 containers: []
	W0210 13:28:54.565779  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:28:54.565788  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:28:54.565864  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:28:54.597764  688914 cri.go:89] found id: ""
	I0210 13:28:54.597806  688914 logs.go:282] 0 containers: []
	W0210 13:28:54.597814  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:28:54.597821  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:28:54.597888  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:28:54.631935  688914 cri.go:89] found id: ""
	I0210 13:28:54.631965  688914 logs.go:282] 0 containers: []
	W0210 13:28:54.631975  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:28:54.631982  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:28:54.632052  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:28:54.664095  688914 cri.go:89] found id: ""
	I0210 13:28:54.664135  688914 logs.go:282] 0 containers: []
	W0210 13:28:54.664147  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:28:54.664154  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:28:54.664213  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:28:54.695397  688914 cri.go:89] found id: ""
	I0210 13:28:54.695433  688914 logs.go:282] 0 containers: []
	W0210 13:28:54.695445  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:28:54.695454  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:28:54.695522  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:28:54.732080  688914 cri.go:89] found id: ""
	I0210 13:28:54.732115  688914 logs.go:282] 0 containers: []
	W0210 13:28:54.732127  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:28:54.732150  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:28:54.732163  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:28:54.838309  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:28:54.838352  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:28:54.876415  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:28:54.876444  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:28:54.925312  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:28:54.925353  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:28:54.938075  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:28:54.938108  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:28:55.007575  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0210 13:28:55.007606  688914 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0210 13:28:55.007664  688914 out.go:270] * 
	W0210 13:28:55.007737  688914 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0210 13:28:55.007760  688914 out.go:270] * 
	W0210 13:28:55.008646  688914 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0210 13:28:55.012559  688914 out.go:201] 
	W0210 13:28:55.013936  688914 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0210 13:28:55.013983  688914 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0210 13:28:55.014019  688914 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0210 13:28:55.015512  688914 out.go:201] 
	
	
	==> CRI-O <==
	Feb 10 13:28:56 old-k8s-version-745712 crio[634]: time="2025-02-10 13:28:56.035093335Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739194136035067949,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=78d676c6-2f42-4480-92d5-833a5ea0d4e9 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 13:28:56 old-k8s-version-745712 crio[634]: time="2025-02-10 13:28:56.035650451Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ad5f237c-aaa0-4edb-a3f0-99ef91b217ea name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:28:56 old-k8s-version-745712 crio[634]: time="2025-02-10 13:28:56.035730513Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ad5f237c-aaa0-4edb-a3f0-99ef91b217ea name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:28:56 old-k8s-version-745712 crio[634]: time="2025-02-10 13:28:56.035772418Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ad5f237c-aaa0-4edb-a3f0-99ef91b217ea name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:28:56 old-k8s-version-745712 crio[634]: time="2025-02-10 13:28:56.075775161Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=058f4b11-9b87-4cb4-b5d6-a3f7d81adb8d name=/runtime.v1.RuntimeService/Version
	Feb 10 13:28:56 old-k8s-version-745712 crio[634]: time="2025-02-10 13:28:56.075882172Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=058f4b11-9b87-4cb4-b5d6-a3f7d81adb8d name=/runtime.v1.RuntimeService/Version
	Feb 10 13:28:56 old-k8s-version-745712 crio[634]: time="2025-02-10 13:28:56.077542316Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1ea5d4d1-cd06-4529-b4b0-09e50f389bb0 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 13:28:56 old-k8s-version-745712 crio[634]: time="2025-02-10 13:28:56.078058933Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739194136078037223,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1ea5d4d1-cd06-4529-b4b0-09e50f389bb0 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 13:28:56 old-k8s-version-745712 crio[634]: time="2025-02-10 13:28:56.079049499Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0aee17e6-7d89-4289-8a5d-cfd4693d13ed name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:28:56 old-k8s-version-745712 crio[634]: time="2025-02-10 13:28:56.079116154Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0aee17e6-7d89-4289-8a5d-cfd4693d13ed name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:28:56 old-k8s-version-745712 crio[634]: time="2025-02-10 13:28:56.079164843Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=0aee17e6-7d89-4289-8a5d-cfd4693d13ed name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:28:56 old-k8s-version-745712 crio[634]: time="2025-02-10 13:28:56.113942440Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=599641a1-306b-4f9d-bf3c-f78aec650b4e name=/runtime.v1.RuntimeService/Version
	Feb 10 13:28:56 old-k8s-version-745712 crio[634]: time="2025-02-10 13:28:56.114043693Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=599641a1-306b-4f9d-bf3c-f78aec650b4e name=/runtime.v1.RuntimeService/Version
	Feb 10 13:28:56 old-k8s-version-745712 crio[634]: time="2025-02-10 13:28:56.117178456Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a9fe3b16-3ac3-4df5-bfd5-fd656225e035 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 13:28:56 old-k8s-version-745712 crio[634]: time="2025-02-10 13:28:56.117534652Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739194136117515675,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a9fe3b16-3ac3-4df5-bfd5-fd656225e035 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 13:28:56 old-k8s-version-745712 crio[634]: time="2025-02-10 13:28:56.118219548Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e257ebf5-10f8-4e75-8162-6d5ad5ace6f7 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:28:56 old-k8s-version-745712 crio[634]: time="2025-02-10 13:28:56.118290600Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e257ebf5-10f8-4e75-8162-6d5ad5ace6f7 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:28:56 old-k8s-version-745712 crio[634]: time="2025-02-10 13:28:56.118343956Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e257ebf5-10f8-4e75-8162-6d5ad5ace6f7 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:28:56 old-k8s-version-745712 crio[634]: time="2025-02-10 13:28:56.152402285Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=395c8704-0f10-4faa-906a-c62f3af8edd2 name=/runtime.v1.RuntimeService/Version
	Feb 10 13:28:56 old-k8s-version-745712 crio[634]: time="2025-02-10 13:28:56.152476672Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=395c8704-0f10-4faa-906a-c62f3af8edd2 name=/runtime.v1.RuntimeService/Version
	Feb 10 13:28:56 old-k8s-version-745712 crio[634]: time="2025-02-10 13:28:56.153504330Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=98f5e634-12b7-4ff5-a211-b1a80da4d24f name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 13:28:56 old-k8s-version-745712 crio[634]: time="2025-02-10 13:28:56.153916698Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739194136153896124,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=98f5e634-12b7-4ff5-a211-b1a80da4d24f name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 13:28:56 old-k8s-version-745712 crio[634]: time="2025-02-10 13:28:56.154369977Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=265dd411-4800-4dde-a52a-62ffe92b106b name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:28:56 old-k8s-version-745712 crio[634]: time="2025-02-10 13:28:56.154435618Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=265dd411-4800-4dde-a52a-62ffe92b106b name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:28:56 old-k8s-version-745712 crio[634]: time="2025-02-10 13:28:56.154468100Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=265dd411-4800-4dde-a52a-62ffe92b106b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Feb10 13:20] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.057667] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039973] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.114070] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.167757] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.632628] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.042916] systemd-fstab-generator[560]: Ignoring "noauto" option for root device
	[  +0.063154] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064261] systemd-fstab-generator[572]: Ignoring "noauto" option for root device
	[  +0.151765] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.139010] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.215149] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +6.104183] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.063040] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.778959] systemd-fstab-generator[1007]: Ignoring "noauto" option for root device
	[Feb10 13:21] kauditd_printk_skb: 46 callbacks suppressed
	[Feb10 13:24] systemd-fstab-generator[5077]: Ignoring "noauto" option for root device
	[Feb10 13:26] systemd-fstab-generator[5356]: Ignoring "noauto" option for root device
	[  +0.069372] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 13:28:56 up 8 min,  0 users,  load average: 0.02, 0.11, 0.06
	Linux old-k8s-version-745712 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Feb 10 13:28:54 old-k8s-version-745712 kubelet[5534]:         /usr/local/go/src/net/dial.go:580 +0x5e5
	Feb 10 13:28:54 old-k8s-version-745712 kubelet[5534]: net.(*sysDialer).dialSerial(0xc000b1d280, 0x4f7fe40, 0xc000196540, 0xc000ba6d00, 0x1, 0x1, 0x0, 0x0, 0x0, 0x0)
	Feb 10 13:28:54 old-k8s-version-745712 kubelet[5534]:         /usr/local/go/src/net/dial.go:548 +0x152
	Feb 10 13:28:54 old-k8s-version-745712 kubelet[5534]: net.(*Dialer).DialContext(0xc000b80060, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000d2a030, 0x24, 0x0, 0x0, 0x0, ...)
	Feb 10 13:28:54 old-k8s-version-745712 kubelet[5534]:         /usr/local/go/src/net/dial.go:425 +0x6e5
	Feb 10 13:28:54 old-k8s-version-745712 kubelet[5534]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000b9c820, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000d2a030, 0x24, 0x60, 0x7fcba83d1a20, 0x118, ...)
	Feb 10 13:28:54 old-k8s-version-745712 kubelet[5534]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Feb 10 13:28:54 old-k8s-version-745712 kubelet[5534]: net/http.(*Transport).dial(0xc0008ee140, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000d2a030, 0x24, 0x0, 0x0, 0x0, ...)
	Feb 10 13:28:54 old-k8s-version-745712 kubelet[5534]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Feb 10 13:28:54 old-k8s-version-745712 kubelet[5534]: net/http.(*Transport).dialConn(0xc0008ee140, 0x4f7fe00, 0xc000120018, 0x0, 0xc0007e4180, 0x5, 0xc000d2a030, 0x24, 0x0, 0xc0008baa20, ...)
	Feb 10 13:28:54 old-k8s-version-745712 kubelet[5534]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Feb 10 13:28:54 old-k8s-version-745712 kubelet[5534]: net/http.(*Transport).dialConnFor(0xc0008ee140, 0xc000bf56b0)
	Feb 10 13:28:54 old-k8s-version-745712 kubelet[5534]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Feb 10 13:28:54 old-k8s-version-745712 kubelet[5534]: created by net/http.(*Transport).queueForDial
	Feb 10 13:28:54 old-k8s-version-745712 kubelet[5534]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Feb 10 13:28:54 old-k8s-version-745712 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 10 13:28:54 old-k8s-version-745712 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 10 13:28:55 old-k8s-version-745712 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Feb 10 13:28:55 old-k8s-version-745712 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 10 13:28:55 old-k8s-version-745712 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 10 13:28:55 old-k8s-version-745712 kubelet[5599]: I0210 13:28:55.283847    5599 server.go:416] Version: v1.20.0
	Feb 10 13:28:55 old-k8s-version-745712 kubelet[5599]: I0210 13:28:55.284146    5599 server.go:837] Client rotation is on, will bootstrap in background
	Feb 10 13:28:55 old-k8s-version-745712 kubelet[5599]: I0210 13:28:55.285950    5599 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 10 13:28:55 old-k8s-version-745712 kubelet[5599]: I0210 13:28:55.287104    5599 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Feb 10 13:28:55 old-k8s-version-745712 kubelet[5599]: W0210 13:28:55.287269    5599 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-745712 -n old-k8s-version-745712
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-745712 -n old-k8s-version-745712: exit status 2 (226.665589ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-745712" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (511.64s)
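The SecondStart failure above ends with minikube's own K8S_KUBELET_NOT_RUNNING suggestion (check 'journalctl -xeu kubelet', try the systemd cgroup driver), and the last kubelet lines in the log ("Cannot detect current cgroup on cgroup v2") point the same way. A minimal manual follow-up sketch, using only commands already quoted in the log; the profile name is taken from the log above, and the cgroup-driver mismatch is an assumption about the root cause, not something the log confirms:

	# Inspect the kubelet and the container runtime on the node (the same commands the log recommends).
	out/minikube-linux-amd64 -p old-k8s-version-745712 ssh "sudo journalctl -xeu kubelet | tail -n 100"
	out/minikube-linux-amd64 -p old-k8s-version-745712 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"

	# Retry the start with the flag minikube itself suggests (systemd cgroup driver for the kubelet).
	out/minikube-linux-amd64 start -p old-k8s-version-745712 --extra-config=kubelet.cgroup-driver=systemd

The empty "container status" table and the connection-refused "describe nodes" output in the log are consistent with this: no control-plane container was ever started, so the only useful evidence is in the kubelet journal.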

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.56s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
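What this helper polls is simply the dashboard pods by label. A rough kubectl equivalent of the check, for manual reproduction (the context name is assumed to match the minikube profile; the namespace and selector are taken from the line above):

	# Poll for the kubernetes-dashboard pods the test is waiting on.
	kubectl --context old-k8s-version-745712 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard

The repeated "connection refused" warnings below are this same poll failing because the apiserver at 192.168.72.78:8443 is not reachable.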
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
E0210 13:29:17.852722  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/no-preload-112306/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
E0210 13:29:44.144673  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/flannel-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:29:55.623364  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/bridge-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:30:46.486164  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/functional-653300/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:31:15.235556  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/default-k8s-diff-port-957542/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:31:15.241939  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/default-k8s-diff-port-957542/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:31:15.253265  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/default-k8s-diff-port-957542/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:31:15.274651  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/default-k8s-diff-port-957542/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:31:15.316123  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/default-k8s-diff-port-957542/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:31:15.397580  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/default-k8s-diff-port-957542/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:31:15.559140  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/default-k8s-diff-port-957542/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:31:15.880838  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/default-k8s-diff-port-957542/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:31:16.522517  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/default-k8s-diff-port-957542/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:31:17.804278  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/default-k8s-diff-port-957542/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:31:20.365571  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/default-k8s-diff-port-957542/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:31:25.486998  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/default-k8s-diff-port-957542/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:31:33.991129  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/no-preload-112306/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:31:35.729006  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/default-k8s-diff-port-957542/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:31:36.421453  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/auto-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:31:53.359757  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/kindnet-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:31:56.211156  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/default-k8s-diff-port-957542/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:32:01.694667  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/no-preload-112306/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
E0210 13:32:36.337602  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
E0210 13:32:36.776757  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/calico-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:32:37.172500  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/default-k8s-diff-port-957542/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
E0210 13:32:59.483824  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/auto-651187/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
E0210 13:33:16.425392  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/kindnet-651187/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
E0210 13:33:17.603300  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/custom-flannel-651187/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
E0210 13:33:52.929173  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/enable-default-cni-651187/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
E0210 13:33:59.094761  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/default-k8s-diff-port-957542/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
E0210 13:33:59.840567  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/calico-651187/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
E0210 13:34:40.669623  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/custom-flannel-651187/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
E0210 13:34:44.144244  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/flannel-651187/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
E0210 13:34:55.623297  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/bridge-651187/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
E0210 13:35:15.996142  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/enable-default-cni-651187/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
E0210 13:35:39.423136  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
E0210 13:35:46.485563  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/functional-653300/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
E0210 13:36:07.208190  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/flannel-651187/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
E0210 13:36:15.234536  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/default-k8s-diff-port-957542/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
E0210 13:36:18.685778  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/bridge-651187/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
E0210 13:36:33.990236  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/no-preload-112306/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
E0210 13:36:36.421122  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/auto-651187/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
E0210 13:36:42.936990  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/default-k8s-diff-port-957542/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
E0210 13:36:53.359270  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/kindnet-651187/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
E0210 13:37:36.337621  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
E0210 13:37:36.776818  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/calico-651187/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
start_stop_delete_test.go:272: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-745712 -n old-k8s-version-745712
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-745712 -n old-k8s-version-745712: exit status 2 (238.915283ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "old-k8s-version-745712" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
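Note: the verdict above follows directly from the log: every poll of the kubernetes-dashboard pods during the 9m window failed with "connection refused" against 192.168.72.78:8443, and the apiserver status probe reported Stopped, so the wait could only end in a context deadline. A minimal manual re-check against the same profile might look like the lines below (the kubeconfig context name is assumed to match the profile name, and the kubectl query can only answer once the apiserver is reachable again):

	out/minikube-linux-amd64 status -p old-k8s-version-745712
	kubectl --context old-k8s-version-745712 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard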
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-745712 -n old-k8s-version-745712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-745712 -n old-k8s-version-745712: exit status 2 (226.094506ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
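Note: "exit status 2 (may be ok)" appears to be the helper tolerating a degraded profile: the host VM reports Running while the apiserver reports Stopped, which matches the connection-refused polling above. One hedged way to confirm from inside the VM that no apiserver container is up (the ssh wrapper is assumed; the crictl invocation is the same one the captured logs below run via ssh_runner) would be:

	out/minikube-linux-amd64 -p old-k8s-version-745712 ssh "sudo crictl ps -a --quiet --name=kube-apiserver"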
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-745712 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| image   | no-preload-112306 image list                           | no-preload-112306            | jenkins | v1.35.0 | 10 Feb 25 13:23 UTC | 10 Feb 25 13:23 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-112306                                   | no-preload-112306            | jenkins | v1.35.0 | 10 Feb 25 13:23 UTC | 10 Feb 25 13:23 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-112306                                   | no-preload-112306            | jenkins | v1.35.0 | 10 Feb 25 13:23 UTC | 10 Feb 25 13:23 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-112306                                   | no-preload-112306            | jenkins | v1.35.0 | 10 Feb 25 13:23 UTC | 10 Feb 25 13:23 UTC |
	| delete  | -p no-preload-112306                                   | no-preload-112306            | jenkins | v1.35.0 | 10 Feb 25 13:23 UTC | 10 Feb 25 13:23 UTC |
	| start   | -p newest-cni-078760 --memory=2200 --alsologtostderr   | newest-cni-078760            | jenkins | v1.35.0 | 10 Feb 25 13:23 UTC | 10 Feb 25 13:24 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| image   | embed-certs-396582 image list                          | embed-certs-396582           | jenkins | v1.35.0 | 10 Feb 25 13:24 UTC | 10 Feb 25 13:24 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-396582                                  | embed-certs-396582           | jenkins | v1.35.0 | 10 Feb 25 13:24 UTC | 10 Feb 25 13:24 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-396582                                  | embed-certs-396582           | jenkins | v1.35.0 | 10 Feb 25 13:24 UTC | 10 Feb 25 13:24 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-396582                                  | embed-certs-396582           | jenkins | v1.35.0 | 10 Feb 25 13:24 UTC | 10 Feb 25 13:24 UTC |
	| delete  | -p embed-certs-396582                                  | embed-certs-396582           | jenkins | v1.35.0 | 10 Feb 25 13:24 UTC | 10 Feb 25 13:24 UTC |
	| addons  | enable metrics-server -p newest-cni-078760             | newest-cni-078760            | jenkins | v1.35.0 | 10 Feb 25 13:24 UTC | 10 Feb 25 13:24 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-078760                                   | newest-cni-078760            | jenkins | v1.35.0 | 10 Feb 25 13:24 UTC | 10 Feb 25 13:24 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-078760                  | newest-cni-078760            | jenkins | v1.35.0 | 10 Feb 25 13:24 UTC | 10 Feb 25 13:24 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-078760 --memory=2200 --alsologtostderr   | newest-cni-078760            | jenkins | v1.35.0 | 10 Feb 25 13:24 UTC | 10 Feb 25 13:25 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-078760 image list                           | newest-cni-078760            | jenkins | v1.35.0 | 10 Feb 25 13:25 UTC | 10 Feb 25 13:25 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-078760                                   | newest-cni-078760            | jenkins | v1.35.0 | 10 Feb 25 13:25 UTC | 10 Feb 25 13:25 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-078760                                   | newest-cni-078760            | jenkins | v1.35.0 | 10 Feb 25 13:25 UTC | 10 Feb 25 13:25 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-078760                                   | newest-cni-078760            | jenkins | v1.35.0 | 10 Feb 25 13:25 UTC | 10 Feb 25 13:25 UTC |
	| delete  | -p newest-cni-078760                                   | newest-cni-078760            | jenkins | v1.35.0 | 10 Feb 25 13:25 UTC | 10 Feb 25 13:25 UTC |
	| image   | default-k8s-diff-port-957542                           | default-k8s-diff-port-957542 | jenkins | v1.35.0 | 10 Feb 25 13:28 UTC | 10 Feb 25 13:28 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-957542 | jenkins | v1.35.0 | 10 Feb 25 13:28 UTC | 10 Feb 25 13:28 UTC |
	|         | default-k8s-diff-port-957542                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-957542 | jenkins | v1.35.0 | 10 Feb 25 13:28 UTC | 10 Feb 25 13:28 UTC |
	|         | default-k8s-diff-port-957542                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-957542 | jenkins | v1.35.0 | 10 Feb 25 13:28 UTC | 10 Feb 25 13:28 UTC |
	|         | default-k8s-diff-port-957542                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-957542 | jenkins | v1.35.0 | 10 Feb 25 13:28 UTC | 10 Feb 25 13:28 UTC |
	|         | default-k8s-diff-port-957542                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/10 13:24:41
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0210 13:24:41.261359  691489 out.go:345] Setting OutFile to fd 1 ...
	I0210 13:24:41.261536  691489 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 13:24:41.261547  691489 out.go:358] Setting ErrFile to fd 2...
	I0210 13:24:41.261554  691489 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 13:24:41.261746  691489 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20383-625153/.minikube/bin
	I0210 13:24:41.262302  691489 out.go:352] Setting JSON to false
	I0210 13:24:41.263380  691489 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":18431,"bootTime":1739175450,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0210 13:24:41.263451  691489 start.go:139] virtualization: kvm guest
	I0210 13:24:41.265793  691489 out.go:177] * [newest-cni-078760] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0210 13:24:41.267418  691489 notify.go:220] Checking for updates...
	I0210 13:24:41.267458  691489 out.go:177]   - MINIKUBE_LOCATION=20383
	I0210 13:24:41.268698  691489 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 13:24:41.270028  691489 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20383-625153/kubeconfig
	I0210 13:24:41.271343  691489 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20383-625153/.minikube
	I0210 13:24:41.272529  691489 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0210 13:24:41.273658  691489 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0210 13:24:41.275235  691489 config.go:182] Loaded profile config "newest-cni-078760": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 13:24:41.275676  691489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:24:41.275733  691489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:24:41.291098  691489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34559
	I0210 13:24:41.291639  691489 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:24:41.292262  691489 main.go:141] libmachine: Using API Version  1
	I0210 13:24:41.292292  691489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:24:41.292606  691489 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:24:41.292771  691489 main.go:141] libmachine: (newest-cni-078760) Calling .DriverName
	I0210 13:24:41.292989  691489 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 13:24:41.293438  691489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:24:41.293515  691489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:24:41.308113  691489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38677
	I0210 13:24:41.308493  691489 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:24:41.308908  691489 main.go:141] libmachine: Using API Version  1
	I0210 13:24:41.308925  691489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:24:41.309289  691489 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:24:41.309516  691489 main.go:141] libmachine: (newest-cni-078760) Calling .DriverName
	I0210 13:24:41.345364  691489 out.go:177] * Using the kvm2 driver based on existing profile
	I0210 13:24:41.346519  691489 start.go:297] selected driver: kvm2
	I0210 13:24:41.346533  691489 start.go:901] validating driver "kvm2" against &{Name:newest-cni-078760 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-078760 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 13:24:41.346634  691489 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0210 13:24:41.347359  691489 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 13:24:41.347444  691489 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20383-625153/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0210 13:24:41.361853  691489 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0210 13:24:41.362275  691489 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0210 13:24:41.362308  691489 cni.go:84] Creating CNI manager for ""
	I0210 13:24:41.362373  691489 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 13:24:41.362421  691489 start.go:340] cluster config:
	{Name:newest-cni-078760 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-078760 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 13:24:41.362555  691489 iso.go:125] acquiring lock: {Name:mk013d189757e85c0699014f5ef29205d9d4927f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 13:24:41.365015  691489 out.go:177] * Starting "newest-cni-078760" primary control-plane node in "newest-cni-078760" cluster
	I0210 13:24:41.366217  691489 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0210 13:24:41.366274  691489 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20383-625153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0210 13:24:41.366291  691489 cache.go:56] Caching tarball of preloaded images
	I0210 13:24:41.366377  691489 preload.go:172] Found /home/jenkins/minikube-integration/20383-625153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0210 13:24:41.366391  691489 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0210 13:24:41.366538  691489 profile.go:143] Saving config to /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/newest-cni-078760/config.json ...
	I0210 13:24:41.366777  691489 start.go:360] acquireMachinesLock for newest-cni-078760: {Name:mk28e87da66de739a4c7c70d1fb5afc4ce31a4d0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0210 13:24:41.366839  691489 start.go:364] duration metric: took 35.147µs to acquireMachinesLock for "newest-cni-078760"
	I0210 13:24:41.366859  691489 start.go:96] Skipping create...Using existing machine configuration
	I0210 13:24:41.366868  691489 fix.go:54] fixHost starting: 
	I0210 13:24:41.367244  691489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:24:41.367288  691489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:24:41.381304  691489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38575
	I0210 13:24:41.381768  691489 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:24:41.382361  691489 main.go:141] libmachine: Using API Version  1
	I0210 13:24:41.382386  691489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:24:41.382722  691489 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:24:41.382913  691489 main.go:141] libmachine: (newest-cni-078760) Calling .DriverName
	I0210 13:24:41.383081  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetState
	I0210 13:24:41.385267  691489 fix.go:112] recreateIfNeeded on newest-cni-078760: state=Stopped err=<nil>
	I0210 13:24:41.385305  691489 main.go:141] libmachine: (newest-cni-078760) Calling .DriverName
	W0210 13:24:41.385473  691489 fix.go:138] unexpected machine state, will restart: <nil>
	I0210 13:24:41.387457  691489 out.go:177] * Restarting existing kvm2 VM for "newest-cni-078760" ...
	I0210 13:24:39.769831  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:24:41.770142  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:24:40.661417  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:24:40.673492  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:24:40.673565  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:24:40.704651  688914 cri.go:89] found id: ""
	I0210 13:24:40.704682  688914 logs.go:282] 0 containers: []
	W0210 13:24:40.704691  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:24:40.704698  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:24:40.704757  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:24:40.738312  688914 cri.go:89] found id: ""
	I0210 13:24:40.738340  688914 logs.go:282] 0 containers: []
	W0210 13:24:40.738348  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:24:40.738355  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:24:40.738427  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:24:40.770358  688914 cri.go:89] found id: ""
	I0210 13:24:40.770392  688914 logs.go:282] 0 containers: []
	W0210 13:24:40.770404  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:24:40.770413  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:24:40.770483  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:24:40.806743  688914 cri.go:89] found id: ""
	I0210 13:24:40.806777  688914 logs.go:282] 0 containers: []
	W0210 13:24:40.806789  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:24:40.806797  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:24:40.806856  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:24:40.838580  688914 cri.go:89] found id: ""
	I0210 13:24:40.838614  688914 logs.go:282] 0 containers: []
	W0210 13:24:40.838626  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:24:40.838643  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:24:40.838715  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:24:40.869410  688914 cri.go:89] found id: ""
	I0210 13:24:40.869441  688914 logs.go:282] 0 containers: []
	W0210 13:24:40.869449  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:24:40.869456  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:24:40.869520  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:24:40.903978  688914 cri.go:89] found id: ""
	I0210 13:24:40.904005  688914 logs.go:282] 0 containers: []
	W0210 13:24:40.904014  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:24:40.904019  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:24:40.904086  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:24:40.937376  688914 cri.go:89] found id: ""
	I0210 13:24:40.937408  688914 logs.go:282] 0 containers: []
	W0210 13:24:40.937416  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:24:40.937426  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:24:40.937444  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:24:40.987586  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:24:40.987628  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:24:41.000596  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:24:41.000625  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:24:41.075352  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:24:41.075376  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:24:41.075396  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:24:41.155409  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:24:41.155441  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:24:43.696222  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:24:43.709019  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:24:43.709115  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:24:43.741277  688914 cri.go:89] found id: ""
	I0210 13:24:43.741309  688914 logs.go:282] 0 containers: []
	W0210 13:24:43.741319  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:24:43.741328  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:24:43.741393  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:24:43.780217  688914 cri.go:89] found id: ""
	I0210 13:24:43.780248  688914 logs.go:282] 0 containers: []
	W0210 13:24:43.780259  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:24:43.780267  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:24:43.780326  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:24:43.818627  688914 cri.go:89] found id: ""
	I0210 13:24:43.818660  688914 logs.go:282] 0 containers: []
	W0210 13:24:43.818673  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:24:43.818681  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:24:43.818747  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:24:43.855216  688914 cri.go:89] found id: ""
	I0210 13:24:43.855248  688914 logs.go:282] 0 containers: []
	W0210 13:24:43.855258  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:24:43.855266  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:24:43.855331  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:24:43.889360  688914 cri.go:89] found id: ""
	I0210 13:24:43.889394  688914 logs.go:282] 0 containers: []
	W0210 13:24:43.889402  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:24:43.889410  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:24:43.889476  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:24:43.934224  688914 cri.go:89] found id: ""
	I0210 13:24:43.934258  688914 logs.go:282] 0 containers: []
	W0210 13:24:43.934266  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:24:43.934273  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:24:43.934329  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:24:43.974800  688914 cri.go:89] found id: ""
	I0210 13:24:43.974830  688914 logs.go:282] 0 containers: []
	W0210 13:24:43.974837  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:24:43.974844  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:24:43.974897  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:24:44.017085  688914 cri.go:89] found id: ""
	I0210 13:24:44.017128  688914 logs.go:282] 0 containers: []
	W0210 13:24:44.017139  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:24:44.017152  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:24:44.017171  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:24:44.067430  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:24:44.067470  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:24:44.081581  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:24:44.081618  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:24:44.153720  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:24:44.153743  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:24:44.153810  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:24:44.235557  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:24:44.235597  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:24:41.388557  691489 main.go:141] libmachine: (newest-cni-078760) Calling .Start
	I0210 13:24:41.388729  691489 main.go:141] libmachine: (newest-cni-078760) starting domain...
	I0210 13:24:41.388749  691489 main.go:141] libmachine: (newest-cni-078760) ensuring networks are active...
	I0210 13:24:41.389682  691489 main.go:141] libmachine: (newest-cni-078760) Ensuring network default is active
	I0210 13:24:41.390063  691489 main.go:141] libmachine: (newest-cni-078760) Ensuring network mk-newest-cni-078760 is active
	I0210 13:24:41.390463  691489 main.go:141] libmachine: (newest-cni-078760) getting domain XML...
	I0210 13:24:41.391221  691489 main.go:141] libmachine: (newest-cni-078760) creating domain...
	I0210 13:24:42.616334  691489 main.go:141] libmachine: (newest-cni-078760) waiting for IP...
	I0210 13:24:42.617299  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:42.617829  691489 main.go:141] libmachine: (newest-cni-078760) DBG | unable to find current IP address of domain newest-cni-078760 in network mk-newest-cni-078760
	I0210 13:24:42.617918  691489 main.go:141] libmachine: (newest-cni-078760) DBG | I0210 13:24:42.617824  691524 retry.go:31] will retry after 283.264685ms: waiting for domain to come up
	I0210 13:24:42.903325  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:42.904000  691489 main.go:141] libmachine: (newest-cni-078760) DBG | unable to find current IP address of domain newest-cni-078760 in network mk-newest-cni-078760
	I0210 13:24:42.904028  691489 main.go:141] libmachine: (newest-cni-078760) DBG | I0210 13:24:42.903933  691524 retry.go:31] will retry after 344.515197ms: waiting for domain to come up
	I0210 13:24:43.250750  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:43.251374  691489 main.go:141] libmachine: (newest-cni-078760) DBG | unable to find current IP address of domain newest-cni-078760 in network mk-newest-cni-078760
	I0210 13:24:43.251425  691489 main.go:141] libmachine: (newest-cni-078760) DBG | I0210 13:24:43.251339  691524 retry.go:31] will retry after 393.453533ms: waiting for domain to come up
	I0210 13:24:43.646892  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:43.647502  691489 main.go:141] libmachine: (newest-cni-078760) DBG | unable to find current IP address of domain newest-cni-078760 in network mk-newest-cni-078760
	I0210 13:24:43.647530  691489 main.go:141] libmachine: (newest-cni-078760) DBG | I0210 13:24:43.647479  691524 retry.go:31] will retry after 372.747782ms: waiting for domain to come up
	I0210 13:24:44.022175  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:44.022720  691489 main.go:141] libmachine: (newest-cni-078760) DBG | unable to find current IP address of domain newest-cni-078760 in network mk-newest-cni-078760
	I0210 13:24:44.022762  691489 main.go:141] libmachine: (newest-cni-078760) DBG | I0210 13:24:44.022643  691524 retry.go:31] will retry after 498.159478ms: waiting for domain to come up
	I0210 13:24:44.522570  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:44.523198  691489 main.go:141] libmachine: (newest-cni-078760) DBG | unable to find current IP address of domain newest-cni-078760 in network mk-newest-cni-078760
	I0210 13:24:44.523228  691489 main.go:141] libmachine: (newest-cni-078760) DBG | I0210 13:24:44.523153  691524 retry.go:31] will retry after 604.957125ms: waiting for domain to come up
	I0210 13:24:45.129970  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:45.130451  691489 main.go:141] libmachine: (newest-cni-078760) DBG | unable to find current IP address of domain newest-cni-078760 in network mk-newest-cni-078760
	I0210 13:24:45.130473  691489 main.go:141] libmachine: (newest-cni-078760) DBG | I0210 13:24:45.130420  691524 retry.go:31] will retry after 898.332464ms: waiting for domain to come up
	I0210 13:24:46.030650  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:46.031180  691489 main.go:141] libmachine: (newest-cni-078760) DBG | unable to find current IP address of domain newest-cni-078760 in network mk-newest-cni-078760
	I0210 13:24:46.031209  691489 main.go:141] libmachine: (newest-cni-078760) DBG | I0210 13:24:46.031128  691524 retry.go:31] will retry after 1.265422975s: waiting for domain to come up
	I0210 13:24:44.271495  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:24:46.770352  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:24:46.773208  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:24:46.785471  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:24:46.785541  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:24:46.819010  688914 cri.go:89] found id: ""
	I0210 13:24:46.819043  688914 logs.go:282] 0 containers: []
	W0210 13:24:46.819053  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:24:46.819061  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:24:46.819125  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:24:46.851361  688914 cri.go:89] found id: ""
	I0210 13:24:46.851395  688914 logs.go:282] 0 containers: []
	W0210 13:24:46.851408  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:24:46.851416  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:24:46.851489  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:24:46.887040  688914 cri.go:89] found id: ""
	I0210 13:24:46.887074  688914 logs.go:282] 0 containers: []
	W0210 13:24:46.887086  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:24:46.887094  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:24:46.887159  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:24:46.919719  688914 cri.go:89] found id: ""
	I0210 13:24:46.919752  688914 logs.go:282] 0 containers: []
	W0210 13:24:46.919763  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:24:46.919780  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:24:46.919854  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:24:46.962383  688914 cri.go:89] found id: ""
	I0210 13:24:46.962416  688914 logs.go:282] 0 containers: []
	W0210 13:24:46.962429  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:24:46.962438  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:24:46.962510  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:24:46.997529  688914 cri.go:89] found id: ""
	I0210 13:24:46.997558  688914 logs.go:282] 0 containers: []
	W0210 13:24:46.997567  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:24:46.997573  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:24:46.997624  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:24:47.034666  688914 cri.go:89] found id: ""
	I0210 13:24:47.034698  688914 logs.go:282] 0 containers: []
	W0210 13:24:47.034709  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:24:47.034717  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:24:47.034772  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:24:47.072750  688914 cri.go:89] found id: ""
	I0210 13:24:47.072780  688914 logs.go:282] 0 containers: []
	W0210 13:24:47.072788  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:24:47.072799  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:24:47.072811  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:24:47.126909  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:24:47.126946  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:24:47.139755  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:24:47.139783  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:24:47.207327  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:24:47.207369  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:24:47.207395  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:24:47.296476  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:24:47.296530  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:24:49.839781  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:24:49.852562  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:24:49.852630  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:24:49.887112  688914 cri.go:89] found id: ""
	I0210 13:24:49.887146  688914 logs.go:282] 0 containers: []
	W0210 13:24:49.887160  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:24:49.887179  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:24:49.887245  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:24:49.920850  688914 cri.go:89] found id: ""
	I0210 13:24:49.920878  688914 logs.go:282] 0 containers: []
	W0210 13:24:49.920885  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:24:49.920891  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:24:49.920944  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:24:49.950969  688914 cri.go:89] found id: ""
	I0210 13:24:49.951002  688914 logs.go:282] 0 containers: []
	W0210 13:24:49.951010  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:24:49.951017  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:24:49.951074  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:24:49.985312  688914 cri.go:89] found id: ""
	I0210 13:24:49.985341  688914 logs.go:282] 0 containers: []
	W0210 13:24:49.985350  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:24:49.985357  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:24:49.985420  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:24:50.022609  688914 cri.go:89] found id: ""
	I0210 13:24:50.022643  688914 logs.go:282] 0 containers: []
	W0210 13:24:50.022654  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:24:50.022662  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:24:50.022741  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:24:50.060874  688914 cri.go:89] found id: ""
	I0210 13:24:50.060910  688914 logs.go:282] 0 containers: []
	W0210 13:24:50.060921  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:24:50.060928  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:24:50.060995  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:24:50.105868  688914 cri.go:89] found id: ""
	I0210 13:24:50.105904  688914 logs.go:282] 0 containers: []
	W0210 13:24:50.105916  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:24:50.105924  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:24:50.105987  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:24:47.297831  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:47.298426  691489 main.go:141] libmachine: (newest-cni-078760) DBG | unable to find current IP address of domain newest-cni-078760 in network mk-newest-cni-078760
	I0210 13:24:47.298458  691489 main.go:141] libmachine: (newest-cni-078760) DBG | I0210 13:24:47.298379  691524 retry.go:31] will retry after 1.501368767s: waiting for domain to come up
	I0210 13:24:48.802064  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:48.802681  691489 main.go:141] libmachine: (newest-cni-078760) DBG | unable to find current IP address of domain newest-cni-078760 in network mk-newest-cni-078760
	I0210 13:24:48.802713  691489 main.go:141] libmachine: (newest-cni-078760) DBG | I0210 13:24:48.802644  691524 retry.go:31] will retry after 1.952900788s: waiting for domain to come up
	I0210 13:24:50.757205  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:50.757657  691489 main.go:141] libmachine: (newest-cni-078760) DBG | unable to find current IP address of domain newest-cni-078760 in network mk-newest-cni-078760
	I0210 13:24:50.757681  691489 main.go:141] libmachine: (newest-cni-078760) DBG | I0210 13:24:50.757634  691524 retry.go:31] will retry after 2.841299634s: waiting for domain to come up
	I0210 13:24:48.770842  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:24:50.771415  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:24:50.143929  688914 cri.go:89] found id: ""
	I0210 13:24:50.143961  688914 logs.go:282] 0 containers: []
	W0210 13:24:50.143980  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:24:50.143990  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:24:50.144006  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:24:50.205049  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:24:50.205092  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:24:50.224083  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:24:50.224118  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:24:50.291786  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:24:50.291812  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:24:50.291831  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:24:50.371326  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:24:50.371371  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:24:52.919235  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:24:52.937153  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:24:52.937253  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:24:52.969532  688914 cri.go:89] found id: ""
	I0210 13:24:52.969567  688914 logs.go:282] 0 containers: []
	W0210 13:24:52.969578  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:24:52.969586  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:24:52.969647  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:24:53.002238  688914 cri.go:89] found id: ""
	I0210 13:24:53.002269  688914 logs.go:282] 0 containers: []
	W0210 13:24:53.002280  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:24:53.002287  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:24:53.002362  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:24:53.035346  688914 cri.go:89] found id: ""
	I0210 13:24:53.035376  688914 logs.go:282] 0 containers: []
	W0210 13:24:53.035384  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:24:53.035392  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:24:53.035461  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:24:53.072805  688914 cri.go:89] found id: ""
	I0210 13:24:53.072897  688914 logs.go:282] 0 containers: []
	W0210 13:24:53.072916  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:24:53.072926  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:24:53.073004  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:24:53.110660  688914 cri.go:89] found id: ""
	I0210 13:24:53.110691  688914 logs.go:282] 0 containers: []
	W0210 13:24:53.110702  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:24:53.110712  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:24:53.110780  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:24:53.147192  688914 cri.go:89] found id: ""
	I0210 13:24:53.147222  688914 logs.go:282] 0 containers: []
	W0210 13:24:53.147233  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:24:53.147242  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:24:53.147309  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:24:53.182225  688914 cri.go:89] found id: ""
	I0210 13:24:53.182260  688914 logs.go:282] 0 containers: []
	W0210 13:24:53.182272  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:24:53.182280  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:24:53.182356  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:24:53.222558  688914 cri.go:89] found id: ""
	I0210 13:24:53.222590  688914 logs.go:282] 0 containers: []
	W0210 13:24:53.222601  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:24:53.222614  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:24:53.222630  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:24:53.279358  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:24:53.279408  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:24:53.294748  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:24:53.294787  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:24:53.369719  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:24:53.369745  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:24:53.369762  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:24:53.451596  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:24:53.451639  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:24:53.601402  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:53.601912  691489 main.go:141] libmachine: (newest-cni-078760) DBG | unable to find current IP address of domain newest-cni-078760 in network mk-newest-cni-078760
	I0210 13:24:53.601961  691489 main.go:141] libmachine: (newest-cni-078760) DBG | I0210 13:24:53.601883  691524 retry.go:31] will retry after 2.542274821s: waiting for domain to come up
	I0210 13:24:56.146274  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:56.146832  691489 main.go:141] libmachine: (newest-cni-078760) DBG | unable to find current IP address of domain newest-cni-078760 in network mk-newest-cni-078760
	I0210 13:24:56.146863  691489 main.go:141] libmachine: (newest-cni-078760) DBG | I0210 13:24:56.146790  691524 retry.go:31] will retry after 3.125209956s: waiting for domain to come up
	I0210 13:24:52.779375  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:24:55.269617  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:24:57.271040  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:24:55.993228  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:24:56.005645  688914 kubeadm.go:597] duration metric: took 4m2.60696863s to restartPrimaryControlPlane
	W0210 13:24:56.005721  688914 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0210 13:24:56.005746  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0210 13:24:56.513498  688914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 13:24:56.526951  688914 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0210 13:24:56.536360  688914 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 13:24:56.544989  688914 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 13:24:56.545005  688914 kubeadm.go:157] found existing configuration files:
	
	I0210 13:24:56.545053  688914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0210 13:24:56.553248  688914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 13:24:56.553299  688914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 13:24:56.562196  688914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0210 13:24:56.570708  688914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 13:24:56.570756  688914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 13:24:56.580086  688914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0210 13:24:56.588161  688914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 13:24:56.588207  688914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 13:24:56.596487  688914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0210 13:24:56.604340  688914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 13:24:56.604385  688914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0210 13:24:56.612499  688914 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0210 13:24:56.823209  688914 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0210 13:24:59.274113  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:59.274657  691489 main.go:141] libmachine: (newest-cni-078760) found domain IP: 192.168.39.250
	I0210 13:24:59.274689  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has current primary IP address 192.168.39.250 and MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:59.274697  691489 main.go:141] libmachine: (newest-cni-078760) reserving static IP address...
	I0210 13:24:59.275163  691489 main.go:141] libmachine: (newest-cni-078760) DBG | found host DHCP lease matching {name: "newest-cni-078760", mac: "52:54:00:6b:a1:b8", ip: "192.168.39.250"} in network mk-newest-cni-078760: {Iface:virbr1 ExpiryTime:2025-02-10 14:24:52 +0000 UTC Type:0 Mac:52:54:00:6b:a1:b8 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:newest-cni-078760 Clientid:01:52:54:00:6b:a1:b8}
	I0210 13:24:59.275200  691489 main.go:141] libmachine: (newest-cni-078760) DBG | skip adding static IP to network mk-newest-cni-078760 - found existing host DHCP lease matching {name: "newest-cni-078760", mac: "52:54:00:6b:a1:b8", ip: "192.168.39.250"}
	I0210 13:24:59.275212  691489 main.go:141] libmachine: (newest-cni-078760) reserved static IP address 192.168.39.250 for domain newest-cni-078760
	I0210 13:24:59.275224  691489 main.go:141] libmachine: (newest-cni-078760) waiting for SSH...
	I0210 13:24:59.275240  691489 main.go:141] libmachine: (newest-cni-078760) DBG | Getting to WaitForSSH function...
	I0210 13:24:59.277564  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:59.277937  691489 main.go:141] libmachine: (newest-cni-078760) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a1:b8", ip: ""} in network mk-newest-cni-078760: {Iface:virbr1 ExpiryTime:2025-02-10 14:24:52 +0000 UTC Type:0 Mac:52:54:00:6b:a1:b8 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:newest-cni-078760 Clientid:01:52:54:00:6b:a1:b8}
	I0210 13:24:59.277972  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined IP address 192.168.39.250 and MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:59.278049  691489 main.go:141] libmachine: (newest-cni-078760) DBG | Using SSH client type: external
	I0210 13:24:59.278098  691489 main.go:141] libmachine: (newest-cni-078760) DBG | Using SSH private key: /home/jenkins/minikube-integration/20383-625153/.minikube/machines/newest-cni-078760/id_rsa (-rw-------)
	I0210 13:24:59.278150  691489 main.go:141] libmachine: (newest-cni-078760) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.250 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20383-625153/.minikube/machines/newest-cni-078760/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0210 13:24:59.278164  691489 main.go:141] libmachine: (newest-cni-078760) DBG | About to run SSH command:
	I0210 13:24:59.278172  691489 main.go:141] libmachine: (newest-cni-078760) DBG | exit 0
	I0210 13:24:59.405034  691489 main.go:141] libmachine: (newest-cni-078760) DBG | SSH cmd err, output: <nil>: 
	I0210 13:24:59.405508  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetConfigRaw
	I0210 13:24:59.406149  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetIP
	I0210 13:24:59.408696  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:59.409061  691489 main.go:141] libmachine: (newest-cni-078760) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a1:b8", ip: ""} in network mk-newest-cni-078760: {Iface:virbr1 ExpiryTime:2025-02-10 14:24:52 +0000 UTC Type:0 Mac:52:54:00:6b:a1:b8 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:newest-cni-078760 Clientid:01:52:54:00:6b:a1:b8}
	I0210 13:24:59.409097  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined IP address 192.168.39.250 and MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:59.409422  691489 profile.go:143] Saving config to /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/newest-cni-078760/config.json ...
	I0210 13:24:59.409617  691489 machine.go:93] provisionDockerMachine start ...
	I0210 13:24:59.409635  691489 main.go:141] libmachine: (newest-cni-078760) Calling .DriverName
	I0210 13:24:59.409892  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHHostname
	I0210 13:24:59.412202  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:59.412549  691489 main.go:141] libmachine: (newest-cni-078760) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a1:b8", ip: ""} in network mk-newest-cni-078760: {Iface:virbr1 ExpiryTime:2025-02-10 14:24:52 +0000 UTC Type:0 Mac:52:54:00:6b:a1:b8 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:newest-cni-078760 Clientid:01:52:54:00:6b:a1:b8}
	I0210 13:24:59.412570  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined IP address 192.168.39.250 and MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:59.412770  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHPort
	I0210 13:24:59.412949  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHKeyPath
	I0210 13:24:59.413066  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHKeyPath
	I0210 13:24:59.413229  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHUsername
	I0210 13:24:59.413383  691489 main.go:141] libmachine: Using SSH client type: native
	I0210 13:24:59.413675  691489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0210 13:24:59.413693  691489 main.go:141] libmachine: About to run SSH command:
	hostname
	I0210 13:24:59.520985  691489 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0210 13:24:59.521014  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetMachineName
	I0210 13:24:59.521304  691489 buildroot.go:166] provisioning hostname "newest-cni-078760"
	I0210 13:24:59.521348  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetMachineName
	I0210 13:24:59.521546  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHHostname
	I0210 13:24:59.524011  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:59.524395  691489 main.go:141] libmachine: (newest-cni-078760) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a1:b8", ip: ""} in network mk-newest-cni-078760: {Iface:virbr1 ExpiryTime:2025-02-10 14:24:52 +0000 UTC Type:0 Mac:52:54:00:6b:a1:b8 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:newest-cni-078760 Clientid:01:52:54:00:6b:a1:b8}
	I0210 13:24:59.524426  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined IP address 192.168.39.250 and MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:59.524511  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHPort
	I0210 13:24:59.524677  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHKeyPath
	I0210 13:24:59.524830  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHKeyPath
	I0210 13:24:59.524930  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHUsername
	I0210 13:24:59.525090  691489 main.go:141] libmachine: Using SSH client type: native
	I0210 13:24:59.525301  691489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0210 13:24:59.525317  691489 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-078760 && echo "newest-cni-078760" | sudo tee /etc/hostname
	I0210 13:24:59.646397  691489 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-078760
	
	I0210 13:24:59.646428  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHHostname
	I0210 13:24:59.649460  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:59.649855  691489 main.go:141] libmachine: (newest-cni-078760) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a1:b8", ip: ""} in network mk-newest-cni-078760: {Iface:virbr1 ExpiryTime:2025-02-10 14:24:52 +0000 UTC Type:0 Mac:52:54:00:6b:a1:b8 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:newest-cni-078760 Clientid:01:52:54:00:6b:a1:b8}
	I0210 13:24:59.649887  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined IP address 192.168.39.250 and MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:59.650122  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHPort
	I0210 13:24:59.650345  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHKeyPath
	I0210 13:24:59.650510  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHKeyPath
	I0210 13:24:59.650661  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHUsername
	I0210 13:24:59.650865  691489 main.go:141] libmachine: Using SSH client type: native
	I0210 13:24:59.651057  691489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0210 13:24:59.651075  691489 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-078760' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-078760/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-078760' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0210 13:24:59.765308  691489 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0210 13:24:59.765347  691489 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20383-625153/.minikube CaCertPath:/home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20383-625153/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20383-625153/.minikube}
	I0210 13:24:59.765387  691489 buildroot.go:174] setting up certificates
	I0210 13:24:59.765401  691489 provision.go:84] configureAuth start
	I0210 13:24:59.765424  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetMachineName
	I0210 13:24:59.765729  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetIP
	I0210 13:24:59.768971  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:59.769366  691489 main.go:141] libmachine: (newest-cni-078760) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a1:b8", ip: ""} in network mk-newest-cni-078760: {Iface:virbr1 ExpiryTime:2025-02-10 14:24:52 +0000 UTC Type:0 Mac:52:54:00:6b:a1:b8 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:newest-cni-078760 Clientid:01:52:54:00:6b:a1:b8}
	I0210 13:24:59.769391  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined IP address 192.168.39.250 and MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:59.769640  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHHostname
	I0210 13:24:59.772244  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:59.772630  691489 main.go:141] libmachine: (newest-cni-078760) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a1:b8", ip: ""} in network mk-newest-cni-078760: {Iface:virbr1 ExpiryTime:2025-02-10 14:24:52 +0000 UTC Type:0 Mac:52:54:00:6b:a1:b8 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:newest-cni-078760 Clientid:01:52:54:00:6b:a1:b8}
	I0210 13:24:59.772667  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined IP address 192.168.39.250 and MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:59.772825  691489 provision.go:143] copyHostCerts
	I0210 13:24:59.772893  691489 exec_runner.go:144] found /home/jenkins/minikube-integration/20383-625153/.minikube/ca.pem, removing ...
	I0210 13:24:59.772903  691489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20383-625153/.minikube/ca.pem
	I0210 13:24:59.772968  691489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20383-625153/.minikube/ca.pem (1082 bytes)
	I0210 13:24:59.773076  691489 exec_runner.go:144] found /home/jenkins/minikube-integration/20383-625153/.minikube/cert.pem, removing ...
	I0210 13:24:59.773084  691489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20383-625153/.minikube/cert.pem
	I0210 13:24:59.773148  691489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20383-625153/.minikube/cert.pem (1123 bytes)
	I0210 13:24:59.773228  691489 exec_runner.go:144] found /home/jenkins/minikube-integration/20383-625153/.minikube/key.pem, removing ...
	I0210 13:24:59.773236  691489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20383-625153/.minikube/key.pem
	I0210 13:24:59.773260  691489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20383-625153/.minikube/key.pem (1675 bytes)
	I0210 13:24:59.773329  691489 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20383-625153/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca-key.pem org=jenkins.newest-cni-078760 san=[127.0.0.1 192.168.39.250 localhost minikube newest-cni-078760]
	I0210 13:25:00.289725  691489 provision.go:177] copyRemoteCerts
	I0210 13:25:00.289790  691489 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0210 13:25:00.289817  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHHostname
	I0210 13:25:00.292758  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:00.293115  691489 main.go:141] libmachine: (newest-cni-078760) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a1:b8", ip: ""} in network mk-newest-cni-078760: {Iface:virbr1 ExpiryTime:2025-02-10 14:24:52 +0000 UTC Type:0 Mac:52:54:00:6b:a1:b8 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:newest-cni-078760 Clientid:01:52:54:00:6b:a1:b8}
	I0210 13:25:00.293149  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined IP address 192.168.39.250 and MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:00.293357  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHPort
	I0210 13:25:00.293603  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHKeyPath
	I0210 13:25:00.293811  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHUsername
	I0210 13:25:00.293957  691489 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/newest-cni-078760/id_rsa Username:docker}
	I0210 13:25:00.383066  691489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0210 13:25:00.405672  691489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0210 13:25:00.428091  691489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0210 13:25:00.448809  691489 provision.go:87] duration metric: took 683.388073ms to configureAuth
	I0210 13:25:00.448837  691489 buildroot.go:189] setting minikube options for container-runtime
	I0210 13:25:00.449011  691489 config.go:182] Loaded profile config "newest-cni-078760": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 13:25:00.449092  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHHostname
	I0210 13:25:00.451834  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:00.452228  691489 main.go:141] libmachine: (newest-cni-078760) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a1:b8", ip: ""} in network mk-newest-cni-078760: {Iface:virbr1 ExpiryTime:2025-02-10 14:24:52 +0000 UTC Type:0 Mac:52:54:00:6b:a1:b8 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:newest-cni-078760 Clientid:01:52:54:00:6b:a1:b8}
	I0210 13:25:00.452255  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined IP address 192.168.39.250 and MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:00.452441  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHPort
	I0210 13:25:00.452649  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHKeyPath
	I0210 13:25:00.452807  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHKeyPath
	I0210 13:25:00.452911  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHUsername
	I0210 13:25:00.453073  691489 main.go:141] libmachine: Using SSH client type: native
	I0210 13:25:00.453278  691489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0210 13:25:00.453302  691489 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0210 13:25:00.672251  691489 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0210 13:25:00.672293  691489 machine.go:96] duration metric: took 1.262661195s to provisionDockerMachine
	I0210 13:25:00.672311  691489 start.go:293] postStartSetup for "newest-cni-078760" (driver="kvm2")
	I0210 13:25:00.672325  691489 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0210 13:25:00.672351  691489 main.go:141] libmachine: (newest-cni-078760) Calling .DriverName
	I0210 13:25:00.672711  691489 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0210 13:25:00.672751  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHHostname
	I0210 13:25:00.675260  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:00.675668  691489 main.go:141] libmachine: (newest-cni-078760) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a1:b8", ip: ""} in network mk-newest-cni-078760: {Iface:virbr1 ExpiryTime:2025-02-10 14:24:52 +0000 UTC Type:0 Mac:52:54:00:6b:a1:b8 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:newest-cni-078760 Clientid:01:52:54:00:6b:a1:b8}
	I0210 13:25:00.675700  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined IP address 192.168.39.250 and MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:00.675807  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHPort
	I0210 13:25:00.675998  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHKeyPath
	I0210 13:25:00.676205  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHUsername
	I0210 13:25:00.676346  691489 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/newest-cni-078760/id_rsa Username:docker}
	I0210 13:25:00.758840  691489 ssh_runner.go:195] Run: cat /etc/os-release
	I0210 13:25:00.762542  691489 info.go:137] Remote host: Buildroot 2023.02.9
	I0210 13:25:00.762567  691489 filesync.go:126] Scanning /home/jenkins/minikube-integration/20383-625153/.minikube/addons for local assets ...
	I0210 13:25:00.762639  691489 filesync.go:126] Scanning /home/jenkins/minikube-integration/20383-625153/.minikube/files for local assets ...
	I0210 13:25:00.762734  691489 filesync.go:149] local asset: /home/jenkins/minikube-integration/20383-625153/.minikube/files/etc/ssl/certs/6323522.pem -> 6323522.pem in /etc/ssl/certs
	I0210 13:25:00.762860  691489 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0210 13:25:00.773351  691489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/files/etc/ssl/certs/6323522.pem --> /etc/ssl/certs/6323522.pem (1708 bytes)
	I0210 13:25:00.796618  691489 start.go:296] duration metric: took 124.2886ms for postStartSetup
	I0210 13:25:00.796673  691489 fix.go:56] duration metric: took 19.429804907s for fixHost
	I0210 13:25:00.796697  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHHostname
	I0210 13:25:00.799632  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:00.799962  691489 main.go:141] libmachine: (newest-cni-078760) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a1:b8", ip: ""} in network mk-newest-cni-078760: {Iface:virbr1 ExpiryTime:2025-02-10 14:24:52 +0000 UTC Type:0 Mac:52:54:00:6b:a1:b8 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:newest-cni-078760 Clientid:01:52:54:00:6b:a1:b8}
	I0210 13:25:00.799989  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined IP address 192.168.39.250 and MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:00.800218  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHPort
	I0210 13:25:00.800405  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHKeyPath
	I0210 13:25:00.800535  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHKeyPath
	I0210 13:25:00.800642  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHUsername
	I0210 13:25:00.800769  691489 main.go:141] libmachine: Using SSH client type: native
	I0210 13:25:00.800931  691489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0210 13:25:00.800941  691489 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0210 13:25:00.909435  691489 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739193900.883827731
	
	I0210 13:25:00.909467  691489 fix.go:216] guest clock: 1739193900.883827731
	I0210 13:25:00.909475  691489 fix.go:229] Guest: 2025-02-10 13:25:00.883827731 +0000 UTC Remote: 2025-02-10 13:25:00.796678487 +0000 UTC m=+19.572875336 (delta=87.149244ms)
	I0210 13:25:00.909527  691489 fix.go:200] guest clock delta is within tolerance: 87.149244ms
	I0210 13:25:00.909539  691489 start.go:83] releasing machines lock for "newest-cni-078760", held for 19.542688037s
	I0210 13:25:00.909575  691489 main.go:141] libmachine: (newest-cni-078760) Calling .DriverName
	I0210 13:25:00.909866  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetIP
	I0210 13:25:00.912692  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:00.913180  691489 main.go:141] libmachine: (newest-cni-078760) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a1:b8", ip: ""} in network mk-newest-cni-078760: {Iface:virbr1 ExpiryTime:2025-02-10 14:24:52 +0000 UTC Type:0 Mac:52:54:00:6b:a1:b8 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:newest-cni-078760 Clientid:01:52:54:00:6b:a1:b8}
	I0210 13:25:00.913209  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined IP address 192.168.39.250 and MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:00.913393  691489 main.go:141] libmachine: (newest-cni-078760) Calling .DriverName
	I0210 13:25:00.913968  691489 main.go:141] libmachine: (newest-cni-078760) Calling .DriverName
	I0210 13:25:00.914173  691489 main.go:141] libmachine: (newest-cni-078760) Calling .DriverName
	I0210 13:25:00.914234  691489 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0210 13:25:00.914286  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHHostname
	I0210 13:25:00.914386  691489 ssh_runner.go:195] Run: cat /version.json
	I0210 13:25:00.914413  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHHostname
	I0210 13:25:00.917197  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:00.917270  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:00.917549  691489 main.go:141] libmachine: (newest-cni-078760) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a1:b8", ip: ""} in network mk-newest-cni-078760: {Iface:virbr1 ExpiryTime:2025-02-10 14:24:52 +0000 UTC Type:0 Mac:52:54:00:6b:a1:b8 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:newest-cni-078760 Clientid:01:52:54:00:6b:a1:b8}
	I0210 13:25:00.917577  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined IP address 192.168.39.250 and MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:00.917603  691489 main.go:141] libmachine: (newest-cni-078760) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a1:b8", ip: ""} in network mk-newest-cni-078760: {Iface:virbr1 ExpiryTime:2025-02-10 14:24:52 +0000 UTC Type:0 Mac:52:54:00:6b:a1:b8 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:newest-cni-078760 Clientid:01:52:54:00:6b:a1:b8}
	I0210 13:25:00.917618  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined IP address 192.168.39.250 and MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:00.917755  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHPort
	I0210 13:25:00.917938  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHPort
	I0210 13:25:00.917969  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHKeyPath
	I0210 13:25:00.918181  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHUsername
	I0210 13:25:00.918186  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHKeyPath
	I0210 13:25:00.918323  691489 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/newest-cni-078760/id_rsa Username:docker}
	I0210 13:25:00.918506  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHUsername
	I0210 13:25:00.918627  691489 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/newest-cni-078760/id_rsa Username:docker}
	I0210 13:25:01.016816  691489 ssh_runner.go:195] Run: systemctl --version
	I0210 13:25:01.022398  691489 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0210 13:25:01.160711  691489 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0210 13:25:01.166231  691489 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0210 13:25:01.166308  691489 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0210 13:25:01.181307  691489 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0210 13:25:01.181340  691489 start.go:495] detecting cgroup driver to use...
	I0210 13:25:01.181432  691489 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0210 13:25:01.196599  691489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0210 13:25:01.210368  691489 docker.go:217] disabling cri-docker service (if available) ...
	I0210 13:25:01.210447  691489 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0210 13:25:01.224277  691489 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0210 13:25:01.237050  691489 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0210 13:25:01.363079  691489 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0210 13:25:01.505721  691489 docker.go:233] disabling docker service ...
	I0210 13:25:01.505798  691489 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0210 13:25:01.519404  691489 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0210 13:25:01.531569  691489 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0210 13:25:01.656701  691489 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0210 13:25:01.761785  691489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0210 13:25:01.775504  691489 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0210 13:25:01.793265  691489 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0210 13:25:01.793350  691489 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:25:01.802631  691489 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0210 13:25:01.802704  691489 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:25:01.811794  691489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:25:01.821081  691489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:25:01.830115  691489 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0210 13:25:01.839351  691489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:25:01.848567  691489 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:25:01.864326  691489 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:25:01.874772  691489 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0210 13:25:01.884394  691489 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0210 13:25:01.884474  691489 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0210 13:25:01.897647  691489 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0210 13:25:01.906297  691489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 13:25:02.014414  691489 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0210 13:25:02.104325  691489 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0210 13:25:02.104434  691489 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0210 13:25:02.108842  691489 start.go:563] Will wait 60s for crictl version
	I0210 13:25:02.108917  691489 ssh_runner.go:195] Run: which crictl
	I0210 13:25:02.112360  691489 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0210 13:25:02.153660  691489 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0210 13:25:02.153771  691489 ssh_runner.go:195] Run: crio --version
	I0210 13:25:02.180774  691489 ssh_runner.go:195] Run: crio --version
	I0210 13:25:02.212419  691489 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0210 13:25:02.213655  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetIP
	I0210 13:25:02.216337  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:02.216703  691489 main.go:141] libmachine: (newest-cni-078760) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a1:b8", ip: ""} in network mk-newest-cni-078760: {Iface:virbr1 ExpiryTime:2025-02-10 14:24:52 +0000 UTC Type:0 Mac:52:54:00:6b:a1:b8 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:newest-cni-078760 Clientid:01:52:54:00:6b:a1:b8}
	I0210 13:25:02.216731  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined IP address 192.168.39.250 and MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:02.217046  691489 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0210 13:25:02.221017  691489 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 13:25:02.234095  691489 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0210 13:24:59.770976  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:02.273787  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:02.235371  691489 kubeadm.go:883] updating cluster {Name:newest-cni-078760 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-078760 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0210 13:25:02.235495  691489 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0210 13:25:02.235552  691489 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 13:25:02.269571  691489 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0210 13:25:02.269654  691489 ssh_runner.go:195] Run: which lz4
	I0210 13:25:02.273617  691489 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0210 13:25:02.277988  691489 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0210 13:25:02.278024  691489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0210 13:25:03.523616  691489 crio.go:462] duration metric: took 1.250045789s to copy over tarball
	I0210 13:25:03.523702  691489 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0210 13:25:05.658254  691489 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.134495502s)
	I0210 13:25:05.658291  691489 crio.go:469] duration metric: took 2.134641092s to extract the tarball
	I0210 13:25:05.658303  691489 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0210 13:25:05.695477  691489 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 13:25:05.735472  691489 crio.go:514] all images are preloaded for cri-o runtime.
	I0210 13:25:05.735496  691489 cache_images.go:84] Images are preloaded, skipping loading
	I0210 13:25:05.735505  691489 kubeadm.go:934] updating node { 192.168.39.250 8443 v1.32.1 crio true true} ...
	I0210 13:25:05.735610  691489 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-078760 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.250
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:newest-cni-078760 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0210 13:25:05.735681  691489 ssh_runner.go:195] Run: crio config
	I0210 13:25:05.785195  691489 cni.go:84] Creating CNI manager for ""
	I0210 13:25:05.785224  691489 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 13:25:05.785234  691489 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0210 13:25:05.785263  691489 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.250 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-078760 NodeName:newest-cni-078760 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.250"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.250 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0210 13:25:05.785425  691489 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.250
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-078760"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.250"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.250"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0210 13:25:05.785511  691489 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0210 13:25:05.794956  691489 binaries.go:44] Found k8s binaries, skipping transfer
	I0210 13:25:05.795032  691489 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0210 13:25:05.804169  691489 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0210 13:25:05.819782  691489 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0210 13:25:05.835103  691489 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I0210 13:25:05.851153  691489 ssh_runner.go:195] Run: grep 192.168.39.250	control-plane.minikube.internal$ /etc/hosts
	I0210 13:25:05.854677  691489 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.250	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 13:25:05.865911  691489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 13:25:05.995134  691489 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 13:25:06.017449  691489 certs.go:68] Setting up /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/newest-cni-078760 for IP: 192.168.39.250
	I0210 13:25:06.017475  691489 certs.go:194] generating shared ca certs ...
	I0210 13:25:06.017497  691489 certs.go:226] acquiring lock for ca certs: {Name:mkf2f72f82a6bd7e3c16bb224cd26b80c3c89e0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:25:06.017658  691489 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20383-625153/.minikube/ca.key
	I0210 13:25:06.017711  691489 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20383-625153/.minikube/proxy-client-ca.key
	I0210 13:25:06.017726  691489 certs.go:256] generating profile certs ...
	I0210 13:25:06.017814  691489 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/newest-cni-078760/client.key
	I0210 13:25:06.017907  691489 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/newest-cni-078760/apiserver.key.1c0773a6
	I0210 13:25:06.017962  691489 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/newest-cni-078760/proxy-client.key
	I0210 13:25:06.018106  691489 certs.go:484] found cert: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/632352.pem (1338 bytes)
	W0210 13:25:06.018145  691489 certs.go:480] ignoring /home/jenkins/minikube-integration/20383-625153/.minikube/certs/632352_empty.pem, impossibly tiny 0 bytes
	I0210 13:25:06.018160  691489 certs.go:484] found cert: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca-key.pem (1675 bytes)
	I0210 13:25:06.018194  691489 certs.go:484] found cert: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca.pem (1082 bytes)
	I0210 13:25:06.018255  691489 certs.go:484] found cert: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/cert.pem (1123 bytes)
	I0210 13:25:06.018301  691489 certs.go:484] found cert: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/key.pem (1675 bytes)
	I0210 13:25:06.018360  691489 certs.go:484] found cert: /home/jenkins/minikube-integration/20383-625153/.minikube/files/etc/ssl/certs/6323522.pem (1708 bytes)
	I0210 13:25:06.019219  691489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0210 13:25:06.049870  691489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0210 13:25:06.079056  691489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0210 13:25:06.111520  691489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0210 13:25:06.144808  691489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/newest-cni-078760/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0210 13:25:06.170435  691489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/newest-cni-078760/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0210 13:25:06.193477  691489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/newest-cni-078760/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0210 13:25:06.216083  691489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/newest-cni-078760/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0210 13:25:06.237420  691489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/files/etc/ssl/certs/6323522.pem --> /usr/share/ca-certificates/6323522.pem (1708 bytes)
	I0210 13:25:06.259080  691489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0210 13:25:04.771284  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:07.270419  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:06.281857  691489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/certs/632352.pem --> /usr/share/ca-certificates/632352.pem (1338 bytes)
	I0210 13:25:06.303749  691489 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0210 13:25:06.319343  691489 ssh_runner.go:195] Run: openssl version
	I0210 13:25:06.324961  691489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/632352.pem && ln -fs /usr/share/ca-certificates/632352.pem /etc/ssl/certs/632352.pem"
	I0210 13:25:06.334777  691489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/632352.pem
	I0210 13:25:06.338786  691489 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 10 12:13 /usr/share/ca-certificates/632352.pem
	I0210 13:25:06.338851  691489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/632352.pem
	I0210 13:25:06.344301  691489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/632352.pem /etc/ssl/certs/51391683.0"
	I0210 13:25:06.354153  691489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6323522.pem && ln -fs /usr/share/ca-certificates/6323522.pem /etc/ssl/certs/6323522.pem"
	I0210 13:25:06.363691  691489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6323522.pem
	I0210 13:25:06.367845  691489 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 10 12:13 /usr/share/ca-certificates/6323522.pem
	I0210 13:25:06.367903  691489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6323522.pem
	I0210 13:25:06.373065  691489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6323522.pem /etc/ssl/certs/3ec20f2e.0"
	I0210 13:25:06.382808  691489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0210 13:25:06.392603  691489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0210 13:25:06.396500  691489 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 10 12:05 /usr/share/ca-certificates/minikubeCA.pem
	I0210 13:25:06.396554  691489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0210 13:25:06.401622  691489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0210 13:25:06.411181  691489 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0210 13:25:06.415359  691489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0210 13:25:06.420593  691489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0210 13:25:06.426061  691489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0210 13:25:06.431327  691489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0210 13:25:06.436533  691489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0210 13:25:06.441660  691489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0210 13:25:06.446816  691489 kubeadm.go:392] StartCluster: {Name:newest-cni-078760 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-078760 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 13:25:06.446895  691489 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0210 13:25:06.446930  691489 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0210 13:25:06.483125  691489 cri.go:89] found id: ""
	I0210 13:25:06.483211  691489 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0210 13:25:06.493195  691489 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0210 13:25:06.493227  691489 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0210 13:25:06.493279  691489 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0210 13:25:06.502619  691489 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0210 13:25:06.503337  691489 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-078760" does not appear in /home/jenkins/minikube-integration/20383-625153/kubeconfig
	I0210 13:25:06.503714  691489 kubeconfig.go:62] /home/jenkins/minikube-integration/20383-625153/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-078760" cluster setting kubeconfig missing "newest-cni-078760" context setting]
	I0210 13:25:06.504205  691489 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20383-625153/kubeconfig: {Name:mke7ef1ff4ff1259856291979fdd0337df3a08b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:25:06.505630  691489 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0210 13:25:06.514911  691489 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.250
	I0210 13:25:06.514960  691489 kubeadm.go:1160] stopping kube-system containers ...
	I0210 13:25:06.514977  691489 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0210 13:25:06.515037  691489 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0210 13:25:06.554131  691489 cri.go:89] found id: ""
	I0210 13:25:06.554214  691489 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0210 13:25:06.570574  691489 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 13:25:06.579872  691489 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 13:25:06.579894  691489 kubeadm.go:157] found existing configuration files:
	
	I0210 13:25:06.579940  691489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0210 13:25:06.588189  691489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 13:25:06.588248  691489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 13:25:06.596978  691489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0210 13:25:06.605371  691489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 13:25:06.605424  691489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 13:25:06.613792  691489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0210 13:25:06.621620  691489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 13:25:06.621676  691489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 13:25:06.629800  691489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0210 13:25:06.637455  691489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 13:25:06.637496  691489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0210 13:25:06.645304  691489 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0210 13:25:06.653346  691489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 13:25:06.763579  691489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 13:25:07.851528  691489 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.087906654s)
	I0210 13:25:07.851566  691489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0210 13:25:08.057073  691489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 13:25:08.142252  691489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0210 13:25:08.227881  691489 api_server.go:52] waiting for apiserver process to appear ...
	I0210 13:25:08.227987  691489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:25:08.728481  691489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:25:09.228059  691489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:25:09.728607  691489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:25:10.228860  691489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:25:10.310725  691489 api_server.go:72] duration metric: took 2.082844906s to wait for apiserver process to appear ...
	I0210 13:25:10.310754  691489 api_server.go:88] waiting for apiserver healthz status ...
	I0210 13:25:10.310775  691489 api_server.go:253] Checking apiserver healthz at https://192.168.39.250:8443/healthz ...
	I0210 13:25:10.311265  691489 api_server.go:269] stopped: https://192.168.39.250:8443/healthz: Get "https://192.168.39.250:8443/healthz": dial tcp 192.168.39.250:8443: connect: connection refused
	I0210 13:25:10.810910  691489 api_server.go:253] Checking apiserver healthz at https://192.168.39.250:8443/healthz ...
	I0210 13:25:09.289289  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:11.769486  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:12.947266  691489 api_server.go:279] https://192.168.39.250:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0210 13:25:12.947307  691489 api_server.go:103] status: https://192.168.39.250:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0210 13:25:12.947327  691489 api_server.go:253] Checking apiserver healthz at https://192.168.39.250:8443/healthz ...
	I0210 13:25:12.971991  691489 api_server.go:279] https://192.168.39.250:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0210 13:25:12.972028  691489 api_server.go:103] status: https://192.168.39.250:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0210 13:25:13.311219  691489 api_server.go:253] Checking apiserver healthz at https://192.168.39.250:8443/healthz ...
	I0210 13:25:13.322624  691489 api_server.go:279] https://192.168.39.250:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0210 13:25:13.322653  691489 api_server.go:103] status: https://192.168.39.250:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0210 13:25:13.811259  691489 api_server.go:253] Checking apiserver healthz at https://192.168.39.250:8443/healthz ...
	I0210 13:25:13.817960  691489 api_server.go:279] https://192.168.39.250:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0210 13:25:13.817992  691489 api_server.go:103] status: https://192.168.39.250:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0210 13:25:14.311715  691489 api_server.go:253] Checking apiserver healthz at https://192.168.39.250:8443/healthz ...
	I0210 13:25:14.319786  691489 api_server.go:279] https://192.168.39.250:8443/healthz returned 200:
	ok
	I0210 13:25:14.327973  691489 api_server.go:141] control plane version: v1.32.1
	I0210 13:25:14.328010  691489 api_server.go:131] duration metric: took 4.017247642s to wait for apiserver health ...
	I0210 13:25:14.328025  691489 cni.go:84] Creating CNI manager for ""
	I0210 13:25:14.328034  691489 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 13:25:14.330184  691489 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0210 13:25:14.331476  691489 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0210 13:25:14.348249  691489 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0210 13:25:14.366751  691489 system_pods.go:43] waiting for kube-system pods to appear ...
	I0210 13:25:14.371867  691489 system_pods.go:59] 8 kube-system pods found
	I0210 13:25:14.371912  691489 system_pods.go:61] "coredns-668d6bf9bc-6xmgm" [e079a121-a86a-40b1-ac42-e3c1d4a45d3e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0210 13:25:14.371924  691489 system_pods.go:61] "etcd-newest-cni-078760" [ab03adeb-629d-40cc-b5a7-612855165223] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0210 13:25:14.371934  691489 system_pods.go:61] "kube-apiserver-newest-cni-078760" [d6bb0517-d5ab-4839-8974-f7c6d58dad52] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0210 13:25:14.371943  691489 system_pods.go:61] "kube-controller-manager-newest-cni-078760" [960a3334-7167-4942-8f1c-5a03ea01e628] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0210 13:25:14.371947  691489 system_pods.go:61] "kube-proxy-kd8mx" [951cb4ab-6e99-4be5-87ee-9e9c8eb4c635] Running
	I0210 13:25:14.371958  691489 system_pods.go:61] "kube-scheduler-newest-cni-078760" [bb9270e8-85d5-460e-89b5-49f374c1775d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0210 13:25:14.371964  691489 system_pods.go:61] "metrics-server-f79f97bbb-m2m4m" [9505b23a-756e-405a-a279-9e5a64082f8d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0210 13:25:14.371973  691489 system_pods.go:61] "storage-provisioner" [027d0f58-173c-4c51-86c6-461f4393192c] Running
	I0210 13:25:14.371978  691489 system_pods.go:74] duration metric: took 5.204788ms to wait for pod list to return data ...
	I0210 13:25:14.371986  691489 node_conditions.go:102] verifying NodePressure condition ...
	I0210 13:25:14.376210  691489 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 13:25:14.376236  691489 node_conditions.go:123] node cpu capacity is 2
	I0210 13:25:14.376248  691489 node_conditions.go:105] duration metric: took 4.255584ms to run NodePressure ...
	I0210 13:25:14.376267  691489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 13:25:14.658659  691489 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0210 13:25:14.673616  691489 ops.go:34] apiserver oom_adj: -16
	I0210 13:25:14.673643  691489 kubeadm.go:597] duration metric: took 8.180409154s to restartPrimaryControlPlane
	I0210 13:25:14.673654  691489 kubeadm.go:394] duration metric: took 8.226850795s to StartCluster
	I0210 13:25:14.673678  691489 settings.go:142] acquiring lock: {Name:mk4bd8331d641665e48ff1d1c4382f2e915609be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:25:14.673775  691489 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20383-625153/kubeconfig
	I0210 13:25:14.674826  691489 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20383-625153/kubeconfig: {Name:mke7ef1ff4ff1259856291979fdd0337df3a08b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:25:14.675121  691489 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0210 13:25:14.675203  691489 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0210 13:25:14.675305  691489 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-078760"
	I0210 13:25:14.675332  691489 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-078760"
	I0210 13:25:14.675330  691489 config.go:182] Loaded profile config "newest-cni-078760": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	W0210 13:25:14.675339  691489 addons.go:247] addon storage-provisioner should already be in state true
	I0210 13:25:14.675327  691489 addons.go:69] Setting default-storageclass=true in profile "newest-cni-078760"
	I0210 13:25:14.675356  691489 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-078760"
	I0210 13:25:14.675374  691489 host.go:66] Checking if "newest-cni-078760" exists ...
	I0210 13:25:14.675362  691489 addons.go:69] Setting dashboard=true in profile "newest-cni-078760"
	I0210 13:25:14.675406  691489 addons.go:238] Setting addon dashboard=true in "newest-cni-078760"
	I0210 13:25:14.675373  691489 addons.go:69] Setting metrics-server=true in profile "newest-cni-078760"
	W0210 13:25:14.675416  691489 addons.go:247] addon dashboard should already be in state true
	I0210 13:25:14.675439  691489 addons.go:238] Setting addon metrics-server=true in "newest-cni-078760"
	W0210 13:25:14.675452  691489 addons.go:247] addon metrics-server should already be in state true
	I0210 13:25:14.675456  691489 host.go:66] Checking if "newest-cni-078760" exists ...
	I0210 13:25:14.675501  691489 host.go:66] Checking if "newest-cni-078760" exists ...
	I0210 13:25:14.675825  691489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:25:14.675825  691489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:25:14.675865  691489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:25:14.675949  691489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:25:14.675956  691489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:25:14.675998  691489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:25:14.675994  691489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:25:14.676030  691489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:25:14.676626  691489 out.go:177] * Verifying Kubernetes components...
	I0210 13:25:14.677970  691489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 13:25:14.692819  691489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41133
	I0210 13:25:14.692863  691489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43933
	I0210 13:25:14.693307  691489 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:25:14.693457  691489 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:25:14.693889  691489 main.go:141] libmachine: Using API Version  1
	I0210 13:25:14.693917  691489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:25:14.694044  691489 main.go:141] libmachine: Using API Version  1
	I0210 13:25:14.694067  691489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:25:14.694275  691489 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:25:14.694467  691489 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:25:14.694675  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetState
	I0210 13:25:14.694875  691489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:25:14.694910  691489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:25:14.695631  691489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39397
	I0210 13:25:14.695666  691489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40755
	I0210 13:25:14.696018  691489 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:25:14.696028  691489 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:25:14.696521  691489 main.go:141] libmachine: Using API Version  1
	I0210 13:25:14.696541  691489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:25:14.696669  691489 main.go:141] libmachine: Using API Version  1
	I0210 13:25:14.696690  691489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:25:14.696922  691489 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:25:14.697247  691489 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:25:14.697481  691489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:25:14.697516  691489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:25:14.697803  691489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:25:14.697850  691489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:25:14.698182  691489 addons.go:238] Setting addon default-storageclass=true in "newest-cni-078760"
	W0210 13:25:14.698206  691489 addons.go:247] addon default-storageclass should already be in state true
	I0210 13:25:14.698236  691489 host.go:66] Checking if "newest-cni-078760" exists ...
	I0210 13:25:14.698612  691489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:25:14.698664  691489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:25:14.713772  691489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40313
	I0210 13:25:14.714442  691489 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:25:14.715026  691489 main.go:141] libmachine: Using API Version  1
	I0210 13:25:14.715052  691489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:25:14.715415  691489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46367
	I0210 13:25:14.715437  691489 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:25:14.715597  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetState
	I0210 13:25:14.715945  691489 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:25:14.716483  691489 main.go:141] libmachine: Using API Version  1
	I0210 13:25:14.716511  691489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:25:14.716848  691489 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:25:14.717071  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetState
	I0210 13:25:14.717863  691489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36917
	I0210 13:25:14.717964  691489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38361
	I0210 13:25:14.718191  691489 main.go:141] libmachine: (newest-cni-078760) Calling .DriverName
	I0210 13:25:14.718430  691489 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:25:14.718536  691489 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:25:14.718898  691489 main.go:141] libmachine: (newest-cni-078760) Calling .DriverName
	I0210 13:25:14.718993  691489 main.go:141] libmachine: Using API Version  1
	I0210 13:25:14.719014  691489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:25:14.719122  691489 main.go:141] libmachine: Using API Version  1
	I0210 13:25:14.719136  691489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:25:14.719353  691489 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:25:14.719538  691489 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:25:14.719570  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetState
	I0210 13:25:14.720089  691489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:25:14.720146  691489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:25:14.720737  691489 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0210 13:25:14.720739  691489 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0210 13:25:14.721144  691489 main.go:141] libmachine: (newest-cni-078760) Calling .DriverName
	I0210 13:25:14.722697  691489 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 13:25:14.722765  691489 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0210 13:25:14.722799  691489 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0210 13:25:14.722826  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHHostname
	I0210 13:25:14.724344  691489 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0210 13:25:14.724481  691489 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0210 13:25:14.724502  691489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0210 13:25:14.724523  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHHostname
	I0210 13:25:14.725362  691489 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0210 13:25:14.725382  691489 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0210 13:25:14.725403  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHHostname
	I0210 13:25:14.726853  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:14.727274  691489 main.go:141] libmachine: (newest-cni-078760) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a1:b8", ip: ""} in network mk-newest-cni-078760: {Iface:virbr1 ExpiryTime:2025-02-10 14:24:52 +0000 UTC Type:0 Mac:52:54:00:6b:a1:b8 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:newest-cni-078760 Clientid:01:52:54:00:6b:a1:b8}
	I0210 13:25:14.727299  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined IP address 192.168.39.250 and MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:14.727826  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHPort
	I0210 13:25:14.728040  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHKeyPath
	I0210 13:25:14.728183  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHUsername
	I0210 13:25:14.728402  691489 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/newest-cni-078760/id_rsa Username:docker}
	I0210 13:25:14.728481  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:14.728865  691489 main.go:141] libmachine: (newest-cni-078760) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a1:b8", ip: ""} in network mk-newest-cni-078760: {Iface:virbr1 ExpiryTime:2025-02-10 14:24:52 +0000 UTC Type:0 Mac:52:54:00:6b:a1:b8 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:newest-cni-078760 Clientid:01:52:54:00:6b:a1:b8}
	I0210 13:25:14.728895  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined IP address 192.168.39.250 and MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:14.728973  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:14.729181  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHPort
	I0210 13:25:14.729432  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHKeyPath
	I0210 13:25:14.729516  691489 main.go:141] libmachine: (newest-cni-078760) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a1:b8", ip: ""} in network mk-newest-cni-078760: {Iface:virbr1 ExpiryTime:2025-02-10 14:24:52 +0000 UTC Type:0 Mac:52:54:00:6b:a1:b8 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:newest-cni-078760 Clientid:01:52:54:00:6b:a1:b8}
	I0210 13:25:14.729542  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined IP address 192.168.39.250 and MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:14.729579  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHUsername
	I0210 13:25:14.729722  691489 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/newest-cni-078760/id_rsa Username:docker}
	I0210 13:25:14.729807  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHPort
	I0210 13:25:14.729972  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHKeyPath
	I0210 13:25:14.730124  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHUsername
	I0210 13:25:14.730252  691489 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/newest-cni-078760/id_rsa Username:docker}
	I0210 13:25:14.765255  691489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33493
	I0210 13:25:14.765791  691489 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:25:14.766387  691489 main.go:141] libmachine: Using API Version  1
	I0210 13:25:14.766420  691489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:25:14.766810  691489 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:25:14.767031  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetState
	I0210 13:25:14.768796  691489 main.go:141] libmachine: (newest-cni-078760) Calling .DriverName
	I0210 13:25:14.769012  691489 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0210 13:25:14.769028  691489 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0210 13:25:14.769046  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHHostname
	I0210 13:25:14.772060  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:14.772513  691489 main.go:141] libmachine: (newest-cni-078760) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a1:b8", ip: ""} in network mk-newest-cni-078760: {Iface:virbr1 ExpiryTime:2025-02-10 14:24:52 +0000 UTC Type:0 Mac:52:54:00:6b:a1:b8 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:newest-cni-078760 Clientid:01:52:54:00:6b:a1:b8}
	I0210 13:25:14.772563  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined IP address 192.168.39.250 and MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:14.772688  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHPort
	I0210 13:25:14.772874  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHKeyPath
	I0210 13:25:14.773046  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHUsername
	I0210 13:25:14.773224  691489 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/newest-cni-078760/id_rsa Username:docker}
	I0210 13:25:14.847727  691489 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 13:25:14.870840  691489 api_server.go:52] waiting for apiserver process to appear ...
	I0210 13:25:14.870928  691489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:25:14.886084  691489 api_server.go:72] duration metric: took 210.925044ms to wait for apiserver process to appear ...
	I0210 13:25:14.886114  691489 api_server.go:88] waiting for apiserver healthz status ...
	I0210 13:25:14.886139  691489 api_server.go:253] Checking apiserver healthz at https://192.168.39.250:8443/healthz ...
	I0210 13:25:14.890757  691489 api_server.go:279] https://192.168.39.250:8443/healthz returned 200:
	ok
	I0210 13:25:14.891635  691489 api_server.go:141] control plane version: v1.32.1
	I0210 13:25:14.891659  691489 api_server.go:131] duration metric: took 5.538021ms to wait for apiserver health ...
	I0210 13:25:14.891667  691489 system_pods.go:43] waiting for kube-system pods to appear ...
	I0210 13:25:14.894919  691489 system_pods.go:59] 8 kube-system pods found
	I0210 13:25:14.894946  691489 system_pods.go:61] "coredns-668d6bf9bc-6xmgm" [e079a121-a86a-40b1-ac42-e3c1d4a45d3e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0210 13:25:14.894957  691489 system_pods.go:61] "etcd-newest-cni-078760" [ab03adeb-629d-40cc-b5a7-612855165223] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0210 13:25:14.894978  691489 system_pods.go:61] "kube-apiserver-newest-cni-078760" [d6bb0517-d5ab-4839-8974-f7c6d58dad52] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0210 13:25:14.894993  691489 system_pods.go:61] "kube-controller-manager-newest-cni-078760" [960a3334-7167-4942-8f1c-5a03ea01e628] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0210 13:25:14.895003  691489 system_pods.go:61] "kube-proxy-kd8mx" [951cb4ab-6e99-4be5-87ee-9e9c8eb4c635] Running
	I0210 13:25:14.895012  691489 system_pods.go:61] "kube-scheduler-newest-cni-078760" [bb9270e8-85d5-460e-89b5-49f374c1775d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0210 13:25:14.895020  691489 system_pods.go:61] "metrics-server-f79f97bbb-m2m4m" [9505b23a-756e-405a-a279-9e5a64082f8d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0210 13:25:14.895031  691489 system_pods.go:61] "storage-provisioner" [027d0f58-173c-4c51-86c6-461f4393192c] Running
	I0210 13:25:14.895036  691489 system_pods.go:74] duration metric: took 3.36419ms to wait for pod list to return data ...
	I0210 13:25:14.895046  691489 default_sa.go:34] waiting for default service account to be created ...
	I0210 13:25:14.896970  691489 default_sa.go:45] found service account: "default"
	I0210 13:25:14.896991  691489 default_sa.go:55] duration metric: took 1.936863ms for default service account to be created ...
	I0210 13:25:14.897002  691489 kubeadm.go:582] duration metric: took 221.847464ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0210 13:25:14.897020  691489 node_conditions.go:102] verifying NodePressure condition ...
	I0210 13:25:14.898549  691489 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 13:25:14.898572  691489 node_conditions.go:123] node cpu capacity is 2
	I0210 13:25:14.898582  691489 node_conditions.go:105] duration metric: took 1.55688ms to run NodePressure ...
	I0210 13:25:14.898599  691489 start.go:241] waiting for startup goroutines ...
	I0210 13:25:14.932116  691489 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0210 13:25:14.932150  691489 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0210 13:25:14.934060  691489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0210 13:25:14.952546  691489 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0210 13:25:14.952574  691489 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0210 13:25:15.029473  691489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0210 13:25:15.031105  691489 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0210 13:25:15.031141  691489 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0210 13:25:15.056497  691489 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0210 13:25:15.056538  691489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0210 13:25:15.095190  691489 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0210 13:25:15.095224  691489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0210 13:25:15.121346  691489 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0210 13:25:15.121374  691489 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0210 13:25:15.153148  691489 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0210 13:25:15.153179  691489 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0210 13:25:15.216706  691489 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0210 13:25:15.216746  691489 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0210 13:25:15.241907  691489 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0210 13:25:15.241943  691489 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0210 13:25:15.302673  691489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0210 13:25:15.365047  691489 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0210 13:25:15.365100  691489 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0210 13:25:15.440460  691489 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0210 13:25:15.440489  691489 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0210 13:25:15.518952  691489 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0210 13:25:15.518987  691489 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0210 13:25:15.565860  691489 main.go:141] libmachine: Making call to close driver server
	I0210 13:25:15.565890  691489 main.go:141] libmachine: (newest-cni-078760) Calling .Close
	I0210 13:25:15.566253  691489 main.go:141] libmachine: Successfully made call to close driver server
	I0210 13:25:15.566279  691489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 13:25:15.566278  691489 main.go:141] libmachine: (newest-cni-078760) DBG | Closing plugin on server side
	I0210 13:25:15.566296  691489 main.go:141] libmachine: Making call to close driver server
	I0210 13:25:15.566308  691489 main.go:141] libmachine: (newest-cni-078760) Calling .Close
	I0210 13:25:15.566612  691489 main.go:141] libmachine: Successfully made call to close driver server
	I0210 13:25:15.566656  691489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 13:25:15.576240  691489 main.go:141] libmachine: Making call to close driver server
	I0210 13:25:15.576264  691489 main.go:141] libmachine: (newest-cni-078760) Calling .Close
	I0210 13:25:15.576535  691489 main.go:141] libmachine: Successfully made call to close driver server
	I0210 13:25:15.576557  691489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 13:25:15.576595  691489 main.go:141] libmachine: (newest-cni-078760) DBG | Closing plugin on server side
	I0210 13:25:15.580109  691489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0210 13:25:16.740012  691489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.71049179s)
	I0210 13:25:16.740081  691489 main.go:141] libmachine: Making call to close driver server
	I0210 13:25:16.740093  691489 main.go:141] libmachine: (newest-cni-078760) Calling .Close
	I0210 13:25:16.740447  691489 main.go:141] libmachine: Successfully made call to close driver server
	I0210 13:25:16.740469  691489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 13:25:16.740478  691489 main.go:141] libmachine: Making call to close driver server
	I0210 13:25:16.740487  691489 main.go:141] libmachine: (newest-cni-078760) Calling .Close
	I0210 13:25:16.740747  691489 main.go:141] libmachine: Successfully made call to close driver server
	I0210 13:25:16.740797  691489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 13:25:16.740830  691489 main.go:141] libmachine: (newest-cni-078760) DBG | Closing plugin on server side
	I0210 13:25:16.805424  691489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.502701364s)
	I0210 13:25:16.805480  691489 main.go:141] libmachine: Making call to close driver server
	I0210 13:25:16.805494  691489 main.go:141] libmachine: (newest-cni-078760) Calling .Close
	I0210 13:25:16.805796  691489 main.go:141] libmachine: (newest-cni-078760) DBG | Closing plugin on server side
	I0210 13:25:16.805817  691489 main.go:141] libmachine: Successfully made call to close driver server
	I0210 13:25:16.805851  691489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 13:25:16.805880  691489 main.go:141] libmachine: Making call to close driver server
	I0210 13:25:16.805893  691489 main.go:141] libmachine: (newest-cni-078760) Calling .Close
	I0210 13:25:16.806125  691489 main.go:141] libmachine: Successfully made call to close driver server
	I0210 13:25:16.806141  691489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 13:25:16.806142  691489 main.go:141] libmachine: (newest-cni-078760) DBG | Closing plugin on server side
	I0210 13:25:16.806153  691489 addons.go:479] Verifying addon metrics-server=true in "newest-cni-078760"
	I0210 13:25:17.452174  691489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.872018184s)
	I0210 13:25:17.452259  691489 main.go:141] libmachine: Making call to close driver server
	I0210 13:25:17.452280  691489 main.go:141] libmachine: (newest-cni-078760) Calling .Close
	I0210 13:25:17.452708  691489 main.go:141] libmachine: Successfully made call to close driver server
	I0210 13:25:17.452733  691489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 13:25:17.452748  691489 main.go:141] libmachine: Making call to close driver server
	I0210 13:25:17.452742  691489 main.go:141] libmachine: (newest-cni-078760) DBG | Closing plugin on server side
	I0210 13:25:17.452757  691489 main.go:141] libmachine: (newest-cni-078760) Calling .Close
	I0210 13:25:17.453057  691489 main.go:141] libmachine: (newest-cni-078760) DBG | Closing plugin on server side
	I0210 13:25:17.453089  691489 main.go:141] libmachine: Successfully made call to close driver server
	I0210 13:25:17.453098  691489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 13:25:17.455198  691489 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-078760 addons enable metrics-server
	
	I0210 13:25:17.456604  691489 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0210 13:25:17.458205  691489 addons.go:514] duration metric: took 2.782999976s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0210 13:25:17.458254  691489 start.go:246] waiting for cluster config update ...
	I0210 13:25:17.458273  691489 start.go:255] writing updated cluster config ...
	I0210 13:25:17.458614  691489 ssh_runner.go:195] Run: rm -f paused
	I0210 13:25:17.524434  691489 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0210 13:25:17.526201  691489 out.go:177] * Done! kubectl is now configured to use "newest-cni-078760" cluster and "default" namespace by default
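[Editor's note] The startup above ends with the apiserver healthz probe returning 200 ("ok") and the four addons enabled. A minimal sketch of the same checks run by hand against this profile (profile name, IP, port, and the pgrep pattern are taken from the log above; adjust for your own cluster):

	minikube -p newest-cni-078760 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'   # apiserver process check, as in api_server.go:52
	curl -sk https://192.168.39.250:8443/healthz                                        # expect "ok" (HTTP 200), as in api_server.go:253
	kubectl --context newest-cni-078760 get pods -n kube-system                         # the system pods enumerated by system_pods.go above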
	I0210 13:25:13.769744  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:15.770291  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:18.270374  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:20.270770  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:22.769900  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:24.770480  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:27.269398  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:29.270791  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:31.769785  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:34.269730  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:36.270751  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:38.770282  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:41.270569  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:43.769870  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:46.269860  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:48.269910  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:50.770287  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:53.270301  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:55.769898  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:57.770053  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:00.270852  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:02.769689  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:04.770190  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:06.770226  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:09.271157  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:11.770318  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:14.269317  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:16.270215  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:18.770402  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:21.269667  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:23.275443  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:25.770573  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:28.270716  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:30.271759  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:32.770603  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:35.269945  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:37.769930  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:39.783553  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:42.271101  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:44.774027  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:47.270211  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:49.771412  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:52.271199  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:52.767674  688914 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0210 13:26:52.767807  688914 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0210 13:26:52.769626  688914 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0210 13:26:52.769700  688914 kubeadm.go:310] [preflight] Running pre-flight checks
	I0210 13:26:52.769810  688914 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0210 13:26:52.769934  688914 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0210 13:26:52.770031  688914 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0210 13:26:52.770114  688914 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0210 13:26:52.771972  688914 out.go:235]   - Generating certificates and keys ...
	I0210 13:26:52.772065  688914 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0210 13:26:52.772157  688914 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0210 13:26:52.772272  688914 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0210 13:26:52.772338  688914 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0210 13:26:52.772402  688914 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0210 13:26:52.772464  688914 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0210 13:26:52.772523  688914 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0210 13:26:52.772581  688914 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0210 13:26:52.772660  688914 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0210 13:26:52.772734  688914 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0210 13:26:52.772770  688914 kubeadm.go:310] [certs] Using the existing "sa" key
	I0210 13:26:52.772822  688914 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0210 13:26:52.772867  688914 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0210 13:26:52.772917  688914 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0210 13:26:52.772974  688914 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0210 13:26:52.773022  688914 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0210 13:26:52.773151  688914 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0210 13:26:52.773258  688914 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0210 13:26:52.773305  688914 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0210 13:26:52.773386  688914 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0210 13:26:52.774698  688914 out.go:235]   - Booting up control plane ...
	I0210 13:26:52.774783  688914 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0210 13:26:52.774853  688914 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0210 13:26:52.774915  688914 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0210 13:26:52.775002  688914 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0210 13:26:52.775179  688914 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0210 13:26:52.775244  688914 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0210 13:26:52.775340  688914 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:26:52.775545  688914 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:26:52.775613  688914 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:26:52.775783  688914 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:26:52.775841  688914 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:26:52.776005  688914 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:26:52.776090  688914 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:26:52.776307  688914 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:26:52.776424  688914 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:26:52.776602  688914 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:26:52.776616  688914 kubeadm.go:310] 
	I0210 13:26:52.776653  688914 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0210 13:26:52.776690  688914 kubeadm.go:310] 		timed out waiting for the condition
	I0210 13:26:52.776699  688914 kubeadm.go:310] 
	I0210 13:26:52.776733  688914 kubeadm.go:310] 	This error is likely caused by:
	I0210 13:26:52.776763  688914 kubeadm.go:310] 		- The kubelet is not running
	I0210 13:26:52.776850  688914 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0210 13:26:52.776856  688914 kubeadm.go:310] 
	I0210 13:26:52.776949  688914 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0210 13:26:52.776979  688914 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0210 13:26:52.777011  688914 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0210 13:26:52.777017  688914 kubeadm.go:310] 
	I0210 13:26:52.777134  688914 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0210 13:26:52.777239  688914 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0210 13:26:52.777252  688914 kubeadm.go:310] 
	I0210 13:26:52.777401  688914 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0210 13:26:52.777543  688914 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0210 13:26:52.777651  688914 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0210 13:26:52.777721  688914 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0210 13:26:52.777789  688914 kubeadm.go:310] 
	W0210 13:26:52.777852  688914 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0210 13:26:52.777903  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
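[Editor's note] For manual triage of the wait-control-plane failure quoted above, the troubleshooting commands kubeadm itself suggests can be run inside the affected VM (e.g. over minikube ssh); a sketch, with the CRI-O endpoint and the CONTAINERID placeholder taken verbatim from that output (the tail filter is an added convenience):

	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 100
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID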
	I0210 13:26:54.770289  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:56.770506  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:58.074596  688914 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.296665584s)
	I0210 13:26:58.074683  688914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 13:26:58.091152  688914 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 13:26:58.102648  688914 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 13:26:58.102673  688914 kubeadm.go:157] found existing configuration files:
	
	I0210 13:26:58.102740  688914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0210 13:26:58.113654  688914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 13:26:58.113729  688914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 13:26:58.124863  688914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0210 13:26:58.135257  688914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 13:26:58.135321  688914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 13:26:58.145634  688914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0210 13:26:58.154591  688914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 13:26:58.154654  688914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 13:26:58.163835  688914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0210 13:26:58.172611  688914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 13:26:58.172679  688914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0210 13:26:58.182392  688914 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0210 13:26:58.251261  688914 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0210 13:26:58.251358  688914 kubeadm.go:310] [preflight] Running pre-flight checks
	I0210 13:26:58.383309  688914 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0210 13:26:58.383435  688914 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0210 13:26:58.383542  688914 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0210 13:26:58.550776  688914 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0210 13:26:58.552680  688914 out.go:235]   - Generating certificates and keys ...
	I0210 13:26:58.552793  688914 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0210 13:26:58.552881  688914 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0210 13:26:58.553007  688914 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0210 13:26:58.553091  688914 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0210 13:26:58.553226  688914 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0210 13:26:58.553329  688914 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0210 13:26:58.553420  688914 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0210 13:26:58.553525  688914 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0210 13:26:58.553642  688914 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0210 13:26:58.553774  688914 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0210 13:26:58.553837  688914 kubeadm.go:310] [certs] Using the existing "sa" key
	I0210 13:26:58.553918  688914 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0210 13:26:58.654826  688914 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0210 13:26:58.871525  688914 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0210 13:26:59.121959  688914 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0210 13:26:59.254004  688914 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0210 13:26:59.268822  688914 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0210 13:26:59.269202  688914 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0210 13:26:59.269279  688914 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0210 13:26:59.410011  688914 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0210 13:26:59.412184  688914 out.go:235]   - Booting up control plane ...
	I0210 13:26:59.412320  688914 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0210 13:26:59.425128  688914 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0210 13:26:59.426554  688914 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0210 13:26:59.427605  688914 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0210 13:26:59.433353  688914 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0210 13:26:59.270125  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:01.270335  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:03.770196  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:06.270103  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:08.770078  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:11.269430  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:13.770250  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:16.269952  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:18.270261  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:20.270697  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:22.768944  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:24.770265  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:27.269151  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:29.270121  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:31.271007  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:33.769366  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:35.769901  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:39.435230  688914 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0210 13:27:39.435410  688914 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:27:39.435648  688914 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:27:38.270194  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:40.770209  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:44.436555  688914 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:27:44.436828  688914 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:27:42.770480  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:45.270561  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:47.770652  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:49.770343  689817 pod_ready.go:82] duration metric: took 4m0.005913971s for pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace to be "Ready" ...
	E0210 13:27:49.770375  689817 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0210 13:27:49.770383  689817 pod_ready.go:39] duration metric: took 4m9.41326084s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0210 13:27:49.770402  689817 api_server.go:52] waiting for apiserver process to appear ...
	I0210 13:27:49.770454  689817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:27:49.770518  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:27:49.817157  689817 cri.go:89] found id: "d627e9b7c3fa6a5649d806da19c32d2f57dec9716719b51822e8311467253f6a"
	I0210 13:27:49.817183  689817 cri.go:89] found id: ""
	I0210 13:27:49.817192  689817 logs.go:282] 1 containers: [d627e9b7c3fa6a5649d806da19c32d2f57dec9716719b51822e8311467253f6a]
	I0210 13:27:49.817252  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:49.821670  689817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:27:49.821737  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:27:49.857058  689817 cri.go:89] found id: "92f9c8b03501892b9afb12729375bb29bc3a99633166557dc27a11936957bbc9"
	I0210 13:27:49.857087  689817 cri.go:89] found id: ""
	I0210 13:27:49.857096  689817 logs.go:282] 1 containers: [92f9c8b03501892b9afb12729375bb29bc3a99633166557dc27a11936957bbc9]
	I0210 13:27:49.857182  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:49.861432  689817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:27:49.861505  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:27:49.897872  689817 cri.go:89] found id: "c69258e06a494a7d31ccc77bb451b1e2def37ce0dbe569f1723c6375331b9844"
	I0210 13:27:49.897903  689817 cri.go:89] found id: ""
	I0210 13:27:49.897914  689817 logs.go:282] 1 containers: [c69258e06a494a7d31ccc77bb451b1e2def37ce0dbe569f1723c6375331b9844]
	I0210 13:27:49.897982  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:49.902266  689817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:27:49.902339  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:27:49.944231  689817 cri.go:89] found id: "e43d7d065b40a13592892ab82740b9367609ef546af51f15305a3ed11cab3a31"
	I0210 13:27:49.944261  689817 cri.go:89] found id: ""
	I0210 13:27:49.944272  689817 logs.go:282] 1 containers: [e43d7d065b40a13592892ab82740b9367609ef546af51f15305a3ed11cab3a31]
	I0210 13:27:49.944336  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:49.948503  689817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:27:49.948579  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:27:49.990016  689817 cri.go:89] found id: "e2562695b2f325b7a32533b0bf97a658ef2ae704f29306876f7f4ed5ae008225"
	I0210 13:27:49.990040  689817 cri.go:89] found id: ""
	I0210 13:27:49.990048  689817 logs.go:282] 1 containers: [e2562695b2f325b7a32533b0bf97a658ef2ae704f29306876f7f4ed5ae008225]
	I0210 13:27:49.990106  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:49.994001  689817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:27:49.994060  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:27:50.027512  689817 cri.go:89] found id: "ac2812f3cb686632428ed2027438bdc46bec1328925155fe9268bc7049fcdfaa"
	I0210 13:27:50.027538  689817 cri.go:89] found id: ""
	I0210 13:27:50.027549  689817 logs.go:282] 1 containers: [ac2812f3cb686632428ed2027438bdc46bec1328925155fe9268bc7049fcdfaa]
	I0210 13:27:50.027614  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:50.031763  689817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:27:50.031823  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:27:50.066416  689817 cri.go:89] found id: ""
	I0210 13:27:50.066448  689817 logs.go:282] 0 containers: []
	W0210 13:27:50.066459  689817 logs.go:284] No container was found matching "kindnet"
	I0210 13:27:50.066467  689817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:27:50.066535  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:27:50.101054  689817 cri.go:89] found id: "bfd354ff383241bb1649bf5b0926b21820572cd5cdd6b975c30be9b3b1a3e9b0"
	I0210 13:27:50.101076  689817 cri.go:89] found id: ""
	I0210 13:27:50.101084  689817 logs.go:282] 1 containers: [bfd354ff383241bb1649bf5b0926b21820572cd5cdd6b975c30be9b3b1a3e9b0]
	I0210 13:27:50.101151  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:50.104987  689817 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0210 13:27:50.105056  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0210 13:27:50.142580  689817 cri.go:89] found id: "e7104d3c43d5daf3e430fe8c4dbd2854e5da62e7f1002ea1cbcf90e2662e5863"
	I0210 13:27:50.142608  689817 cri.go:89] found id: "9c223eae833ec780bdacab65f7a885047e04740e48b7cd90dbdce507d087969a"
	I0210 13:27:50.142614  689817 cri.go:89] found id: ""
	I0210 13:27:50.142624  689817 logs.go:282] 2 containers: [e7104d3c43d5daf3e430fe8c4dbd2854e5da62e7f1002ea1cbcf90e2662e5863 9c223eae833ec780bdacab65f7a885047e04740e48b7cd90dbdce507d087969a]
	I0210 13:27:50.142692  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:50.146540  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:50.150056  689817 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:27:50.150079  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0210 13:27:50.311229  689817 logs.go:123] Gathering logs for kube-apiserver [d627e9b7c3fa6a5649d806da19c32d2f57dec9716719b51822e8311467253f6a] ...
	I0210 13:27:50.311279  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d627e9b7c3fa6a5649d806da19c32d2f57dec9716719b51822e8311467253f6a"
	I0210 13:27:50.366011  689817 logs.go:123] Gathering logs for etcd [92f9c8b03501892b9afb12729375bb29bc3a99633166557dc27a11936957bbc9] ...
	I0210 13:27:50.366046  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92f9c8b03501892b9afb12729375bb29bc3a99633166557dc27a11936957bbc9"
	I0210 13:27:50.412490  689817 logs.go:123] Gathering logs for kube-controller-manager [ac2812f3cb686632428ed2027438bdc46bec1328925155fe9268bc7049fcdfaa] ...
	I0210 13:27:50.412523  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac2812f3cb686632428ed2027438bdc46bec1328925155fe9268bc7049fcdfaa"
	I0210 13:27:50.476890  689817 logs.go:123] Gathering logs for kubelet ...
	I0210 13:27:50.476940  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:27:50.571913  689817 logs.go:123] Gathering logs for coredns [c69258e06a494a7d31ccc77bb451b1e2def37ce0dbe569f1723c6375331b9844] ...
	I0210 13:27:50.571960  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c69258e06a494a7d31ccc77bb451b1e2def37ce0dbe569f1723c6375331b9844"
	I0210 13:27:50.606241  689817 logs.go:123] Gathering logs for kube-proxy [e2562695b2f325b7a32533b0bf97a658ef2ae704f29306876f7f4ed5ae008225] ...
	I0210 13:27:50.606284  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2562695b2f325b7a32533b0bf97a658ef2ae704f29306876f7f4ed5ae008225"
	I0210 13:27:50.640859  689817 logs.go:123] Gathering logs for storage-provisioner [e7104d3c43d5daf3e430fe8c4dbd2854e5da62e7f1002ea1cbcf90e2662e5863] ...
	I0210 13:27:50.640895  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7104d3c43d5daf3e430fe8c4dbd2854e5da62e7f1002ea1cbcf90e2662e5863"
	I0210 13:27:50.675943  689817 logs.go:123] Gathering logs for storage-provisioner [9c223eae833ec780bdacab65f7a885047e04740e48b7cd90dbdce507d087969a] ...
	I0210 13:27:50.675979  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c223eae833ec780bdacab65f7a885047e04740e48b7cd90dbdce507d087969a"
	I0210 13:27:50.708397  689817 logs.go:123] Gathering logs for container status ...
	I0210 13:27:50.708447  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:27:50.759969  689817 logs.go:123] Gathering logs for dmesg ...
	I0210 13:27:50.760002  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:27:50.773795  689817 logs.go:123] Gathering logs for kube-scheduler [e43d7d065b40a13592892ab82740b9367609ef546af51f15305a3ed11cab3a31] ...
	I0210 13:27:50.773827  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e43d7d065b40a13592892ab82740b9367609ef546af51f15305a3ed11cab3a31"
	I0210 13:27:50.808393  689817 logs.go:123] Gathering logs for kubernetes-dashboard [bfd354ff383241bb1649bf5b0926b21820572cd5cdd6b975c30be9b3b1a3e9b0] ...
	I0210 13:27:50.808426  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bfd354ff383241bb1649bf5b0926b21820572cd5cdd6b975c30be9b3b1a3e9b0"
	I0210 13:27:50.841955  689817 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:27:50.841988  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:27:54.437160  688914 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:27:54.437400  688914 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:27:53.852846  689817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:27:53.869585  689817 api_server.go:72] duration metric: took 4m20.830334356s to wait for apiserver process to appear ...
	I0210 13:27:53.869618  689817 api_server.go:88] waiting for apiserver healthz status ...
	I0210 13:27:53.869665  689817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:27:53.869721  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:27:53.907655  689817 cri.go:89] found id: "d627e9b7c3fa6a5649d806da19c32d2f57dec9716719b51822e8311467253f6a"
	I0210 13:27:53.907686  689817 cri.go:89] found id: ""
	I0210 13:27:53.907695  689817 logs.go:282] 1 containers: [d627e9b7c3fa6a5649d806da19c32d2f57dec9716719b51822e8311467253f6a]
	I0210 13:27:53.907758  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:53.911810  689817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:27:53.911893  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:27:53.952378  689817 cri.go:89] found id: "92f9c8b03501892b9afb12729375bb29bc3a99633166557dc27a11936957bbc9"
	I0210 13:27:53.952414  689817 cri.go:89] found id: ""
	I0210 13:27:53.952424  689817 logs.go:282] 1 containers: [92f9c8b03501892b9afb12729375bb29bc3a99633166557dc27a11936957bbc9]
	I0210 13:27:53.952481  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:53.956365  689817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:27:53.956441  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:27:53.991382  689817 cri.go:89] found id: "c69258e06a494a7d31ccc77bb451b1e2def37ce0dbe569f1723c6375331b9844"
	I0210 13:27:53.991419  689817 cri.go:89] found id: ""
	I0210 13:27:53.991428  689817 logs.go:282] 1 containers: [c69258e06a494a7d31ccc77bb451b1e2def37ce0dbe569f1723c6375331b9844]
	I0210 13:27:53.991485  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:53.995300  689817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:27:53.995386  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:27:54.029032  689817 cri.go:89] found id: "e43d7d065b40a13592892ab82740b9367609ef546af51f15305a3ed11cab3a31"
	I0210 13:27:54.029061  689817 cri.go:89] found id: ""
	I0210 13:27:54.029071  689817 logs.go:282] 1 containers: [e43d7d065b40a13592892ab82740b9367609ef546af51f15305a3ed11cab3a31]
	I0210 13:27:54.029148  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:54.032926  689817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:27:54.032978  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:27:54.070279  689817 cri.go:89] found id: "e2562695b2f325b7a32533b0bf97a658ef2ae704f29306876f7f4ed5ae008225"
	I0210 13:27:54.070310  689817 cri.go:89] found id: ""
	I0210 13:27:54.070321  689817 logs.go:282] 1 containers: [e2562695b2f325b7a32533b0bf97a658ef2ae704f29306876f7f4ed5ae008225]
	I0210 13:27:54.070380  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:54.074168  689817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:27:54.074254  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:27:54.108632  689817 cri.go:89] found id: "ac2812f3cb686632428ed2027438bdc46bec1328925155fe9268bc7049fcdfaa"
	I0210 13:27:54.108665  689817 cri.go:89] found id: ""
	I0210 13:27:54.108676  689817 logs.go:282] 1 containers: [ac2812f3cb686632428ed2027438bdc46bec1328925155fe9268bc7049fcdfaa]
	I0210 13:27:54.108752  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:54.112693  689817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:27:54.112777  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:27:54.147138  689817 cri.go:89] found id: ""
	I0210 13:27:54.147170  689817 logs.go:282] 0 containers: []
	W0210 13:27:54.147178  689817 logs.go:284] No container was found matching "kindnet"
	I0210 13:27:54.147185  689817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:27:54.147247  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:27:54.183531  689817 cri.go:89] found id: "bfd354ff383241bb1649bf5b0926b21820572cd5cdd6b975c30be9b3b1a3e9b0"
	I0210 13:27:54.183555  689817 cri.go:89] found id: ""
	I0210 13:27:54.183563  689817 logs.go:282] 1 containers: [bfd354ff383241bb1649bf5b0926b21820572cd5cdd6b975c30be9b3b1a3e9b0]
	I0210 13:27:54.183620  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:54.187900  689817 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0210 13:27:54.187970  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0210 13:27:54.224779  689817 cri.go:89] found id: "e7104d3c43d5daf3e430fe8c4dbd2854e5da62e7f1002ea1cbcf90e2662e5863"
	I0210 13:27:54.224803  689817 cri.go:89] found id: "9c223eae833ec780bdacab65f7a885047e04740e48b7cd90dbdce507d087969a"
	I0210 13:27:54.224807  689817 cri.go:89] found id: ""
	I0210 13:27:54.224815  689817 logs.go:282] 2 containers: [e7104d3c43d5daf3e430fe8c4dbd2854e5da62e7f1002ea1cbcf90e2662e5863 9c223eae833ec780bdacab65f7a885047e04740e48b7cd90dbdce507d087969a]
	I0210 13:27:54.224870  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:54.229251  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:54.232955  689817 logs.go:123] Gathering logs for coredns [c69258e06a494a7d31ccc77bb451b1e2def37ce0dbe569f1723c6375331b9844] ...
	I0210 13:27:54.232973  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c69258e06a494a7d31ccc77bb451b1e2def37ce0dbe569f1723c6375331b9844"
	I0210 13:27:54.266570  689817 logs.go:123] Gathering logs for kube-controller-manager [ac2812f3cb686632428ed2027438bdc46bec1328925155fe9268bc7049fcdfaa] ...
	I0210 13:27:54.266604  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac2812f3cb686632428ed2027438bdc46bec1328925155fe9268bc7049fcdfaa"
	I0210 13:27:54.343214  689817 logs.go:123] Gathering logs for storage-provisioner [e7104d3c43d5daf3e430fe8c4dbd2854e5da62e7f1002ea1cbcf90e2662e5863] ...
	I0210 13:27:54.343252  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7104d3c43d5daf3e430fe8c4dbd2854e5da62e7f1002ea1cbcf90e2662e5863"
	I0210 13:27:54.376776  689817 logs.go:123] Gathering logs for kubernetes-dashboard [bfd354ff383241bb1649bf5b0926b21820572cd5cdd6b975c30be9b3b1a3e9b0] ...
	I0210 13:27:54.376808  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bfd354ff383241bb1649bf5b0926b21820572cd5cdd6b975c30be9b3b1a3e9b0"
	I0210 13:27:54.410609  689817 logs.go:123] Gathering logs for storage-provisioner [9c223eae833ec780bdacab65f7a885047e04740e48b7cd90dbdce507d087969a] ...
	I0210 13:27:54.410639  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c223eae833ec780bdacab65f7a885047e04740e48b7cd90dbdce507d087969a"
	I0210 13:27:54.443452  689817 logs.go:123] Gathering logs for kubelet ...
	I0210 13:27:54.443478  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:27:54.527929  689817 logs.go:123] Gathering logs for dmesg ...
	I0210 13:27:54.527979  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:27:54.542227  689817 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:27:54.542268  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0210 13:27:54.641377  689817 logs.go:123] Gathering logs for etcd [92f9c8b03501892b9afb12729375bb29bc3a99633166557dc27a11936957bbc9] ...
	I0210 13:27:54.641418  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92f9c8b03501892b9afb12729375bb29bc3a99633166557dc27a11936957bbc9"
	I0210 13:27:54.688223  689817 logs.go:123] Gathering logs for kube-proxy [e2562695b2f325b7a32533b0bf97a658ef2ae704f29306876f7f4ed5ae008225] ...
	I0210 13:27:54.688271  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2562695b2f325b7a32533b0bf97a658ef2ae704f29306876f7f4ed5ae008225"
	I0210 13:27:54.725502  689817 logs.go:123] Gathering logs for kube-apiserver [d627e9b7c3fa6a5649d806da19c32d2f57dec9716719b51822e8311467253f6a] ...
	I0210 13:27:54.725539  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d627e9b7c3fa6a5649d806da19c32d2f57dec9716719b51822e8311467253f6a"
	I0210 13:27:54.765130  689817 logs.go:123] Gathering logs for kube-scheduler [e43d7d065b40a13592892ab82740b9367609ef546af51f15305a3ed11cab3a31] ...
	I0210 13:27:54.765167  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e43d7d065b40a13592892ab82740b9367609ef546af51f15305a3ed11cab3a31"
	I0210 13:27:54.800179  689817 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:27:54.800207  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:27:55.252259  689817 logs.go:123] Gathering logs for container status ...
	I0210 13:27:55.252300  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:27:57.789687  689817 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8444/healthz ...
	I0210 13:27:57.794618  689817 api_server.go:279] https://192.168.50.61:8444/healthz returned 200:
	ok
	I0210 13:27:57.795699  689817 api_server.go:141] control plane version: v1.32.1
	I0210 13:27:57.795724  689817 api_server.go:131] duration metric: took 3.926098165s to wait for apiserver health ...
	I0210 13:27:57.795735  689817 system_pods.go:43] waiting for kube-system pods to appear ...
	I0210 13:27:57.795772  689817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:27:57.795820  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:27:57.829148  689817 cri.go:89] found id: "d627e9b7c3fa6a5649d806da19c32d2f57dec9716719b51822e8311467253f6a"
	I0210 13:27:57.829179  689817 cri.go:89] found id: ""
	I0210 13:27:57.829190  689817 logs.go:282] 1 containers: [d627e9b7c3fa6a5649d806da19c32d2f57dec9716719b51822e8311467253f6a]
	I0210 13:27:57.829265  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:57.833209  689817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:27:57.833272  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:27:57.865761  689817 cri.go:89] found id: "92f9c8b03501892b9afb12729375bb29bc3a99633166557dc27a11936957bbc9"
	I0210 13:27:57.865789  689817 cri.go:89] found id: ""
	I0210 13:27:57.865799  689817 logs.go:282] 1 containers: [92f9c8b03501892b9afb12729375bb29bc3a99633166557dc27a11936957bbc9]
	I0210 13:27:57.865866  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:57.869409  689817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:27:57.869480  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:27:57.905847  689817 cri.go:89] found id: "c69258e06a494a7d31ccc77bb451b1e2def37ce0dbe569f1723c6375331b9844"
	I0210 13:27:57.905875  689817 cri.go:89] found id: ""
	I0210 13:27:57.905886  689817 logs.go:282] 1 containers: [c69258e06a494a7d31ccc77bb451b1e2def37ce0dbe569f1723c6375331b9844]
	I0210 13:27:57.905956  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:57.911821  689817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:27:57.911896  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:27:57.950779  689817 cri.go:89] found id: "e43d7d065b40a13592892ab82740b9367609ef546af51f15305a3ed11cab3a31"
	I0210 13:27:57.950803  689817 cri.go:89] found id: ""
	I0210 13:27:57.950810  689817 logs.go:282] 1 containers: [e43d7d065b40a13592892ab82740b9367609ef546af51f15305a3ed11cab3a31]
	I0210 13:27:57.950880  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:57.954573  689817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:27:57.954651  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:27:57.991678  689817 cri.go:89] found id: "e2562695b2f325b7a32533b0bf97a658ef2ae704f29306876f7f4ed5ae008225"
	I0210 13:27:57.991705  689817 cri.go:89] found id: ""
	I0210 13:27:57.991717  689817 logs.go:282] 1 containers: [e2562695b2f325b7a32533b0bf97a658ef2ae704f29306876f7f4ed5ae008225]
	I0210 13:27:57.991772  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:57.995971  689817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:27:57.996063  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:27:58.029073  689817 cri.go:89] found id: "ac2812f3cb686632428ed2027438bdc46bec1328925155fe9268bc7049fcdfaa"
	I0210 13:27:58.029098  689817 cri.go:89] found id: ""
	I0210 13:27:58.029144  689817 logs.go:282] 1 containers: [ac2812f3cb686632428ed2027438bdc46bec1328925155fe9268bc7049fcdfaa]
	I0210 13:27:58.029212  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:58.034012  689817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:27:58.034073  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:27:58.071316  689817 cri.go:89] found id: ""
	I0210 13:27:58.071346  689817 logs.go:282] 0 containers: []
	W0210 13:27:58.071358  689817 logs.go:284] No container was found matching "kindnet"
	I0210 13:27:58.071367  689817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:27:58.071438  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:27:58.105280  689817 cri.go:89] found id: "bfd354ff383241bb1649bf5b0926b21820572cd5cdd6b975c30be9b3b1a3e9b0"
	I0210 13:27:58.105308  689817 cri.go:89] found id: ""
	I0210 13:27:58.105319  689817 logs.go:282] 1 containers: [bfd354ff383241bb1649bf5b0926b21820572cd5cdd6b975c30be9b3b1a3e9b0]
	I0210 13:27:58.105390  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:58.109074  689817 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0210 13:27:58.109169  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0210 13:27:58.141391  689817 cri.go:89] found id: "e7104d3c43d5daf3e430fe8c4dbd2854e5da62e7f1002ea1cbcf90e2662e5863"
	I0210 13:27:58.141415  689817 cri.go:89] found id: "9c223eae833ec780bdacab65f7a885047e04740e48b7cd90dbdce507d087969a"
	I0210 13:27:58.141422  689817 cri.go:89] found id: ""
	I0210 13:27:58.141432  689817 logs.go:282] 2 containers: [e7104d3c43d5daf3e430fe8c4dbd2854e5da62e7f1002ea1cbcf90e2662e5863 9c223eae833ec780bdacab65f7a885047e04740e48b7cd90dbdce507d087969a]
	I0210 13:27:58.141490  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:58.144977  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:58.148249  689817 logs.go:123] Gathering logs for kube-controller-manager [ac2812f3cb686632428ed2027438bdc46bec1328925155fe9268bc7049fcdfaa] ...
	I0210 13:27:58.148272  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac2812f3cb686632428ed2027438bdc46bec1328925155fe9268bc7049fcdfaa"
	I0210 13:27:58.201328  689817 logs.go:123] Gathering logs for kubelet ...
	I0210 13:27:58.201360  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:27:58.296953  689817 logs.go:123] Gathering logs for dmesg ...
	I0210 13:27:58.297010  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:27:58.311276  689817 logs.go:123] Gathering logs for etcd [92f9c8b03501892b9afb12729375bb29bc3a99633166557dc27a11936957bbc9] ...
	I0210 13:27:58.311312  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92f9c8b03501892b9afb12729375bb29bc3a99633166557dc27a11936957bbc9"
	I0210 13:27:58.361415  689817 logs.go:123] Gathering logs for coredns [c69258e06a494a7d31ccc77bb451b1e2def37ce0dbe569f1723c6375331b9844] ...
	I0210 13:27:58.361452  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c69258e06a494a7d31ccc77bb451b1e2def37ce0dbe569f1723c6375331b9844"
	I0210 13:27:58.396072  689817 logs.go:123] Gathering logs for kube-apiserver [d627e9b7c3fa6a5649d806da19c32d2f57dec9716719b51822e8311467253f6a] ...
	I0210 13:27:58.396109  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d627e9b7c3fa6a5649d806da19c32d2f57dec9716719b51822e8311467253f6a"
	I0210 13:27:58.448027  689817 logs.go:123] Gathering logs for kube-scheduler [e43d7d065b40a13592892ab82740b9367609ef546af51f15305a3ed11cab3a31] ...
	I0210 13:27:58.448064  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e43d7d065b40a13592892ab82740b9367609ef546af51f15305a3ed11cab3a31"
	I0210 13:27:58.481535  689817 logs.go:123] Gathering logs for kube-proxy [e2562695b2f325b7a32533b0bf97a658ef2ae704f29306876f7f4ed5ae008225] ...
	I0210 13:27:58.481573  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2562695b2f325b7a32533b0bf97a658ef2ae704f29306876f7f4ed5ae008225"
	I0210 13:27:58.514411  689817 logs.go:123] Gathering logs for kubernetes-dashboard [bfd354ff383241bb1649bf5b0926b21820572cd5cdd6b975c30be9b3b1a3e9b0] ...
	I0210 13:27:58.514445  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bfd354ff383241bb1649bf5b0926b21820572cd5cdd6b975c30be9b3b1a3e9b0"
	I0210 13:27:58.549570  689817 logs.go:123] Gathering logs for storage-provisioner [9c223eae833ec780bdacab65f7a885047e04740e48b7cd90dbdce507d087969a] ...
	I0210 13:27:58.549603  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c223eae833ec780bdacab65f7a885047e04740e48b7cd90dbdce507d087969a"
	I0210 13:27:58.592297  689817 logs.go:123] Gathering logs for container status ...
	I0210 13:27:58.592330  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:27:58.631626  689817 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:27:58.631667  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0210 13:27:58.727480  689817 logs.go:123] Gathering logs for storage-provisioner [e7104d3c43d5daf3e430fe8c4dbd2854e5da62e7f1002ea1cbcf90e2662e5863] ...
	I0210 13:27:58.727519  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7104d3c43d5daf3e430fe8c4dbd2854e5da62e7f1002ea1cbcf90e2662e5863"
	I0210 13:27:58.760031  689817 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:27:58.760069  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:28:01.664367  689817 system_pods.go:59] 8 kube-system pods found
	I0210 13:28:01.664422  689817 system_pods.go:61] "coredns-668d6bf9bc-fj2zq" [583359d8-8ada-4747-8682-6176db3f798a] Running
	I0210 13:28:01.664431  689817 system_pods.go:61] "etcd-default-k8s-diff-port-957542" [15bd93be-c696-42f6-9406-abe5d824a9d0] Running
	I0210 13:28:01.664436  689817 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-957542" [475365bf-2504-46d7-a068-5f5e3a9c773e] Running
	I0210 13:28:01.664442  689817 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-957542" [21fcb133-d0ed-4608-8d25-3719f15d0aaa] Running
	I0210 13:28:01.664446  689817 system_pods.go:61] "kube-proxy-8th94" [1e1a48fd-55a5-48e4-84dc-638f9d650e12] Running
	I0210 13:28:01.664451  689817 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-957542" [1bbe3544-9217-4b50-9903-8b0edf49f996] Running
	I0210 13:28:01.664459  689817 system_pods.go:61] "metrics-server-f79f97bbb-sg6xj" [4fd14781-7917-44e7-8358-2ae86a7bac81] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0210 13:28:01.664465  689817 system_pods.go:61] "storage-provisioner" [30e8603f-89cf-4919-9bf4-bcece8c32934] Running
	I0210 13:28:01.664478  689817 system_pods.go:74] duration metric: took 3.868731638s to wait for pod list to return data ...
	I0210 13:28:01.664492  689817 default_sa.go:34] waiting for default service account to be created ...
	I0210 13:28:01.666845  689817 default_sa.go:45] found service account: "default"
	I0210 13:28:01.666865  689817 default_sa.go:55] duration metric: took 2.365764ms for default service account to be created ...
	I0210 13:28:01.666874  689817 system_pods.go:116] waiting for k8s-apps to be running ...
	I0210 13:28:01.669411  689817 system_pods.go:86] 8 kube-system pods found
	I0210 13:28:01.669440  689817 system_pods.go:89] "coredns-668d6bf9bc-fj2zq" [583359d8-8ada-4747-8682-6176db3f798a] Running
	I0210 13:28:01.669446  689817 system_pods.go:89] "etcd-default-k8s-diff-port-957542" [15bd93be-c696-42f6-9406-abe5d824a9d0] Running
	I0210 13:28:01.669451  689817 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-957542" [475365bf-2504-46d7-a068-5f5e3a9c773e] Running
	I0210 13:28:01.669455  689817 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-957542" [21fcb133-d0ed-4608-8d25-3719f15d0aaa] Running
	I0210 13:28:01.669459  689817 system_pods.go:89] "kube-proxy-8th94" [1e1a48fd-55a5-48e4-84dc-638f9d650e12] Running
	I0210 13:28:01.669463  689817 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-957542" [1bbe3544-9217-4b50-9903-8b0edf49f996] Running
	I0210 13:28:01.669469  689817 system_pods.go:89] "metrics-server-f79f97bbb-sg6xj" [4fd14781-7917-44e7-8358-2ae86a7bac81] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0210 13:28:01.669474  689817 system_pods.go:89] "storage-provisioner" [30e8603f-89cf-4919-9bf4-bcece8c32934] Running
	I0210 13:28:01.669482  689817 system_pods.go:126] duration metric: took 2.601853ms to wait for k8s-apps to be running ...
	I0210 13:28:01.669489  689817 system_svc.go:44] waiting for kubelet service to be running ....
	I0210 13:28:01.669552  689817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 13:28:01.684641  689817 system_svc.go:56] duration metric: took 15.145438ms WaitForService to wait for kubelet
	I0210 13:28:01.684677  689817 kubeadm.go:582] duration metric: took 4m28.645432042s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0210 13:28:01.684724  689817 node_conditions.go:102] verifying NodePressure condition ...
	I0210 13:28:01.687051  689817 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 13:28:01.687081  689817 node_conditions.go:123] node cpu capacity is 2
	I0210 13:28:01.687115  689817 node_conditions.go:105] duration metric: took 2.383739ms to run NodePressure ...
	I0210 13:28:01.687149  689817 start.go:241] waiting for startup goroutines ...
	I0210 13:28:01.687161  689817 start.go:246] waiting for cluster config update ...
	I0210 13:28:01.687172  689817 start.go:255] writing updated cluster config ...
	I0210 13:28:01.687476  689817 ssh_runner.go:195] Run: rm -f paused
	I0210 13:28:01.739316  689817 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0210 13:28:01.741286  689817 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-957542" cluster and "default" namespace by default
	I0210 13:28:14.437678  688914 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:28:14.437931  688914 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:28:54.436979  688914 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:28:54.437271  688914 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:28:54.437281  688914 kubeadm.go:310] 
	I0210 13:28:54.437319  688914 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0210 13:28:54.437355  688914 kubeadm.go:310] 		timed out waiting for the condition
	I0210 13:28:54.437361  688914 kubeadm.go:310] 
	I0210 13:28:54.437390  688914 kubeadm.go:310] 	This error is likely caused by:
	I0210 13:28:54.437468  688914 kubeadm.go:310] 		- The kubelet is not running
	I0210 13:28:54.437614  688914 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0210 13:28:54.437628  688914 kubeadm.go:310] 
	I0210 13:28:54.437762  688914 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0210 13:28:54.437806  688914 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0210 13:28:54.437850  688914 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0210 13:28:54.437863  688914 kubeadm.go:310] 
	I0210 13:28:54.437986  688914 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0210 13:28:54.438064  688914 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0210 13:28:54.438084  688914 kubeadm.go:310] 
	I0210 13:28:54.438245  688914 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0210 13:28:54.438388  688914 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0210 13:28:54.438510  688914 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0210 13:28:54.438608  688914 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0210 13:28:54.438622  688914 kubeadm.go:310] 
	I0210 13:28:54.439017  688914 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0210 13:28:54.439094  688914 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0210 13:28:54.439183  688914 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0210 13:28:54.439220  688914 kubeadm.go:394] duration metric: took 8m1.096783715s to StartCluster
	I0210 13:28:54.439356  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:28:54.439446  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:28:54.481711  688914 cri.go:89] found id: ""
	I0210 13:28:54.481745  688914 logs.go:282] 0 containers: []
	W0210 13:28:54.481753  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:28:54.481759  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:28:54.481826  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:28:54.515485  688914 cri.go:89] found id: ""
	I0210 13:28:54.515513  688914 logs.go:282] 0 containers: []
	W0210 13:28:54.515521  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:28:54.515528  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:28:54.515585  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:28:54.565719  688914 cri.go:89] found id: ""
	I0210 13:28:54.565767  688914 logs.go:282] 0 containers: []
	W0210 13:28:54.565779  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:28:54.565788  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:28:54.565864  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:28:54.597764  688914 cri.go:89] found id: ""
	I0210 13:28:54.597806  688914 logs.go:282] 0 containers: []
	W0210 13:28:54.597814  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:28:54.597821  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:28:54.597888  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:28:54.631935  688914 cri.go:89] found id: ""
	I0210 13:28:54.631965  688914 logs.go:282] 0 containers: []
	W0210 13:28:54.631975  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:28:54.631982  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:28:54.632052  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:28:54.664095  688914 cri.go:89] found id: ""
	I0210 13:28:54.664135  688914 logs.go:282] 0 containers: []
	W0210 13:28:54.664147  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:28:54.664154  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:28:54.664213  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:28:54.695397  688914 cri.go:89] found id: ""
	I0210 13:28:54.695433  688914 logs.go:282] 0 containers: []
	W0210 13:28:54.695445  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:28:54.695454  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:28:54.695522  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:28:54.732080  688914 cri.go:89] found id: ""
	I0210 13:28:54.732115  688914 logs.go:282] 0 containers: []
	W0210 13:28:54.732127  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:28:54.732150  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:28:54.732163  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:28:54.838309  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:28:54.838352  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:28:54.876415  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:28:54.876444  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:28:54.925312  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:28:54.925353  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:28:54.938075  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:28:54.938108  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:28:55.007575  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0210 13:28:55.007606  688914 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0210 13:28:55.007664  688914 out.go:270] * 
	W0210 13:28:55.007737  688914 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0210 13:28:55.007760  688914 out.go:270] * 
	W0210 13:28:55.008646  688914 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0210 13:28:55.012559  688914 out.go:201] 
	W0210 13:28:55.013936  688914 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0210 13:28:55.013983  688914 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0210 13:28:55.014019  688914 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0210 13:28:55.015512  688914 out.go:201] 
	
	
	==> CRI-O <==
	Feb 10 13:37:57 old-k8s-version-745712 crio[634]: time="2025-02-10 13:37:57.609340403Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739194677609316363,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=23a595ea-215b-4d94-bbbc-8ea3805e3653 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 13:37:57 old-k8s-version-745712 crio[634]: time="2025-02-10 13:37:57.609886426Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fd054c67-25d9-402f-90cc-0cd6520b98ed name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:37:57 old-k8s-version-745712 crio[634]: time="2025-02-10 13:37:57.609933032Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fd054c67-25d9-402f-90cc-0cd6520b98ed name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:37:57 old-k8s-version-745712 crio[634]: time="2025-02-10 13:37:57.609974242Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=fd054c67-25d9-402f-90cc-0cd6520b98ed name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:37:57 old-k8s-version-745712 crio[634]: time="2025-02-10 13:37:57.639123013Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7768b940-8e64-474a-8f38-dd5b2b938c6b name=/runtime.v1.RuntimeService/Version
	Feb 10 13:37:57 old-k8s-version-745712 crio[634]: time="2025-02-10 13:37:57.639211483Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7768b940-8e64-474a-8f38-dd5b2b938c6b name=/runtime.v1.RuntimeService/Version
	Feb 10 13:37:57 old-k8s-version-745712 crio[634]: time="2025-02-10 13:37:57.640310663Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a1c2da34-b867-4ede-b3fa-1905ce2d14b7 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 13:37:57 old-k8s-version-745712 crio[634]: time="2025-02-10 13:37:57.640791048Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739194677640760877,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a1c2da34-b867-4ede-b3fa-1905ce2d14b7 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 13:37:57 old-k8s-version-745712 crio[634]: time="2025-02-10 13:37:57.641542795Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7bf1d360-a9fe-49a5-a61d-23cab19e43a7 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:37:57 old-k8s-version-745712 crio[634]: time="2025-02-10 13:37:57.641664171Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7bf1d360-a9fe-49a5-a61d-23cab19e43a7 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:37:57 old-k8s-version-745712 crio[634]: time="2025-02-10 13:37:57.641705648Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=7bf1d360-a9fe-49a5-a61d-23cab19e43a7 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:37:57 old-k8s-version-745712 crio[634]: time="2025-02-10 13:37:57.673016161Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ee229a36-9688-4a39-a3a0-c46817edbb05 name=/runtime.v1.RuntimeService/Version
	Feb 10 13:37:57 old-k8s-version-745712 crio[634]: time="2025-02-10 13:37:57.673130068Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ee229a36-9688-4a39-a3a0-c46817edbb05 name=/runtime.v1.RuntimeService/Version
	Feb 10 13:37:57 old-k8s-version-745712 crio[634]: time="2025-02-10 13:37:57.674469711Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=15594716-2697-485e-b4f1-c8f85ecd8d51 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 13:37:57 old-k8s-version-745712 crio[634]: time="2025-02-10 13:37:57.674919094Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739194677674893950,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=15594716-2697-485e-b4f1-c8f85ecd8d51 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 13:37:57 old-k8s-version-745712 crio[634]: time="2025-02-10 13:37:57.675455982Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f0cc34f4-bfb2-4c0f-bd2c-28dac2c2b26a name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:37:57 old-k8s-version-745712 crio[634]: time="2025-02-10 13:37:57.675524617Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f0cc34f4-bfb2-4c0f-bd2c-28dac2c2b26a name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:37:57 old-k8s-version-745712 crio[634]: time="2025-02-10 13:37:57.675556277Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f0cc34f4-bfb2-4c0f-bd2c-28dac2c2b26a name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:37:57 old-k8s-version-745712 crio[634]: time="2025-02-10 13:37:57.706110239Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9a3b3565-1213-4dcc-8d18-81a4e05ab0f4 name=/runtime.v1.RuntimeService/Version
	Feb 10 13:37:57 old-k8s-version-745712 crio[634]: time="2025-02-10 13:37:57.706213550Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9a3b3565-1213-4dcc-8d18-81a4e05ab0f4 name=/runtime.v1.RuntimeService/Version
	Feb 10 13:37:57 old-k8s-version-745712 crio[634]: time="2025-02-10 13:37:57.707541664Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8ffc9fd2-49ba-40c0-a74e-ff89e8acefac name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 13:37:57 old-k8s-version-745712 crio[634]: time="2025-02-10 13:37:57.707995106Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739194677707966400,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8ffc9fd2-49ba-40c0-a74e-ff89e8acefac name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 13:37:57 old-k8s-version-745712 crio[634]: time="2025-02-10 13:37:57.708475986Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f0d9dfd2-2e23-4ea1-a9f6-17cb89ac580c name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:37:57 old-k8s-version-745712 crio[634]: time="2025-02-10 13:37:57.708525586Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f0d9dfd2-2e23-4ea1-a9f6-17cb89ac580c name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:37:57 old-k8s-version-745712 crio[634]: time="2025-02-10 13:37:57.708563633Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f0d9dfd2-2e23-4ea1-a9f6-17cb89ac580c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Feb10 13:20] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.057667] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039973] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.114070] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.167757] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.632628] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.042916] systemd-fstab-generator[560]: Ignoring "noauto" option for root device
	[  +0.063154] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064261] systemd-fstab-generator[572]: Ignoring "noauto" option for root device
	[  +0.151765] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.139010] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.215149] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +6.104183] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.063040] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.778959] systemd-fstab-generator[1007]: Ignoring "noauto" option for root device
	[Feb10 13:21] kauditd_printk_skb: 46 callbacks suppressed
	[Feb10 13:24] systemd-fstab-generator[5077]: Ignoring "noauto" option for root device
	[Feb10 13:26] systemd-fstab-generator[5356]: Ignoring "noauto" option for root device
	[  +0.069372] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 13:37:57 up 17 min,  0 users,  load average: 0.07, 0.02, 0.02
	Linux old-k8s-version-745712 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Feb 10 13:37:55 old-k8s-version-745712 kubelet[6529]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:138 +0x185
	Feb 10 13:37:55 old-k8s-version-745712 kubelet[6529]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	Feb 10 13:37:55 old-k8s-version-745712 kubelet[6529]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	Feb 10 13:37:55 old-k8s-version-745712 kubelet[6529]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0006426f0)
	Feb 10 13:37:55 old-k8s-version-745712 kubelet[6529]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Feb 10 13:37:55 old-k8s-version-745712 kubelet[6529]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000949ef0, 0x4f0ac20, 0xc0003f53b0, 0x1, 0xc00009e0c0)
	Feb 10 13:37:55 old-k8s-version-745712 kubelet[6529]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Feb 10 13:37:55 old-k8s-version-745712 kubelet[6529]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0008c2c40, 0xc00009e0c0)
	Feb 10 13:37:55 old-k8s-version-745712 kubelet[6529]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Feb 10 13:37:55 old-k8s-version-745712 kubelet[6529]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Feb 10 13:37:55 old-k8s-version-745712 kubelet[6529]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Feb 10 13:37:55 old-k8s-version-745712 kubelet[6529]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc0008ffa10, 0xc000938ae0)
	Feb 10 13:37:55 old-k8s-version-745712 kubelet[6529]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Feb 10 13:37:55 old-k8s-version-745712 kubelet[6529]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Feb 10 13:37:55 old-k8s-version-745712 kubelet[6529]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Feb 10 13:37:55 old-k8s-version-745712 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 10 13:37:55 old-k8s-version-745712 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 10 13:37:55 old-k8s-version-745712 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Feb 10 13:37:55 old-k8s-version-745712 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 10 13:37:55 old-k8s-version-745712 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 10 13:37:55 old-k8s-version-745712 kubelet[6538]: I0210 13:37:55.759161    6538 server.go:416] Version: v1.20.0
	Feb 10 13:37:55 old-k8s-version-745712 kubelet[6538]: I0210 13:37:55.759396    6538 server.go:837] Client rotation is on, will bootstrap in background
	Feb 10 13:37:55 old-k8s-version-745712 kubelet[6538]: I0210 13:37:55.761348    6538 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 10 13:37:55 old-k8s-version-745712 kubelet[6538]: W0210 13:37:55.762254    6538 manager.go:159] Cannot detect current cgroup on cgroup v2
	Feb 10 13:37:55 old-k8s-version-745712 kubelet[6538]: I0210 13:37:55.762286    6538 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
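Editor's note: in the captured logs above, the "container status" table is empty and "describe nodes" fails with connection refused on localhost:8443, which suggests the control-plane containers are not running on this node. A minimal way to confirm that from the host, assuming the same profile name old-k8s-version-745712 used throughout this log, would be to query CRI-O directly over SSH (crictl ps -a and crictl pods are standard crictl subcommands):

	out/minikube-linux-amd64 -p old-k8s-version-745712 ssh -- sudo crictl ps -a
	out/minikube-linux-amd64 -p old-k8s-version-745712 ssh -- sudo crictl pods

Whether these list anything depends on whether the kubelet has managed to re-create the static pods at the time the command is run.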
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-745712 -n old-k8s-version-745712
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-745712 -n old-k8s-version-745712: exit status 2 (234.088449ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-745712" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.56s)
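Editor's note: the kubelet log excerpt above shows the kubelet crash-looping (systemd restart counter at 114) with the apiserver on :8443 refusing connections, which is consistent both with this failure and with the repeated connection-refused warnings in the next test entry. A sketch of how one might inspect the crash loop on the node, assuming the same old-k8s-version-745712 profile (systemctl status and journalctl -u are standard systemd commands):

	out/minikube-linux-amd64 -p old-k8s-version-745712 ssh -- sudo systemctl status kubelet
	out/minikube-linux-amd64 -p old-k8s-version-745712 ssh -- sudo journalctl -u kubelet --no-pager -n 100

The journalctl output would show the panic or configuration error that causes each restart, which the truncated stack trace above does not include.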

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (383.47s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
E0210 13:38:17.603951  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/custom-flannel-651187/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
E0210 13:38:52.928642  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/enable-default-cni-651187/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
E0210 13:39:44.143767  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/flannel-651187/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
E0210 13:39:55.622914  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/bridge-651187/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
	[... the same warning repeated 46 more times ...]
E0210 13:40:46.485375  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/functional-653300/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
	[... the same warning repeated 27 more times ...]
E0210 13:41:15.234592  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/default-k8s-diff-port-957542/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
	[... the same warning repeated 18 more times ...]
E0210 13:41:33.990340  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/no-preload-112306/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
	[... the same warning repeated 2 more times ...]
E0210 13:41:36.421323  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/auto-651187/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
	[... the same warning repeated 16 more times ...]
E0210 13:41:53.359700  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/kindnet-651187/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
	[... the same warning repeated 42 more times ...]
E0210 13:42:36.338373  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:42:36.776721  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/calico-651187/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
	[... the same warning repeated 19 more times ...]
E0210 13:42:57.056119  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/no-preload-112306/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
E0210 13:43:17.603090  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/custom-flannel-651187/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
E0210 13:43:49.561571  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/functional-653300/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
E0210 13:43:52.929157  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/enable-default-cni-651187/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
start_stop_delete_test.go:285: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-745712 -n old-k8s-version-745712
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-745712 -n old-k8s-version-745712: exit status 2 (241.484769ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "old-k8s-version-745712" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-745712 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context old-k8s-version-745712 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.008µs)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-745712 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
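For manual triage of this image check, a kubectl query along these lines (not part of the test run; the profile and namespace names are taken from the log above, and it assumes the apiserver is reachable) would show which image the dashboard-metrics-scraper deployment actually references:

    kubectl --context old-k8s-version-745712 -n kubernetes-dashboard \
      get deploy dashboard-metrics-scraper \
      -o jsonpath='{.spec.template.spec.containers[*].image}'

The output is expected to contain registry.k8s.io/echoserver:1.4, matching the expectation checked at start_stop_delete_test.go:295 above; here the query could not be made because the apiserver was stopped.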
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-745712 -n old-k8s-version-745712
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-745712 -n old-k8s-version-745712: exit status 2 (226.139382ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-745712 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| image   | no-preload-112306 image list                           | no-preload-112306            | jenkins | v1.35.0 | 10 Feb 25 13:23 UTC | 10 Feb 25 13:23 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-112306                                   | no-preload-112306            | jenkins | v1.35.0 | 10 Feb 25 13:23 UTC | 10 Feb 25 13:23 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-112306                                   | no-preload-112306            | jenkins | v1.35.0 | 10 Feb 25 13:23 UTC | 10 Feb 25 13:23 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-112306                                   | no-preload-112306            | jenkins | v1.35.0 | 10 Feb 25 13:23 UTC | 10 Feb 25 13:23 UTC |
	| delete  | -p no-preload-112306                                   | no-preload-112306            | jenkins | v1.35.0 | 10 Feb 25 13:23 UTC | 10 Feb 25 13:23 UTC |
	| start   | -p newest-cni-078760 --memory=2200 --alsologtostderr   | newest-cni-078760            | jenkins | v1.35.0 | 10 Feb 25 13:23 UTC | 10 Feb 25 13:24 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| image   | embed-certs-396582 image list                          | embed-certs-396582           | jenkins | v1.35.0 | 10 Feb 25 13:24 UTC | 10 Feb 25 13:24 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-396582                                  | embed-certs-396582           | jenkins | v1.35.0 | 10 Feb 25 13:24 UTC | 10 Feb 25 13:24 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-396582                                  | embed-certs-396582           | jenkins | v1.35.0 | 10 Feb 25 13:24 UTC | 10 Feb 25 13:24 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-396582                                  | embed-certs-396582           | jenkins | v1.35.0 | 10 Feb 25 13:24 UTC | 10 Feb 25 13:24 UTC |
	| delete  | -p embed-certs-396582                                  | embed-certs-396582           | jenkins | v1.35.0 | 10 Feb 25 13:24 UTC | 10 Feb 25 13:24 UTC |
	| addons  | enable metrics-server -p newest-cni-078760             | newest-cni-078760            | jenkins | v1.35.0 | 10 Feb 25 13:24 UTC | 10 Feb 25 13:24 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-078760                                   | newest-cni-078760            | jenkins | v1.35.0 | 10 Feb 25 13:24 UTC | 10 Feb 25 13:24 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-078760                  | newest-cni-078760            | jenkins | v1.35.0 | 10 Feb 25 13:24 UTC | 10 Feb 25 13:24 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-078760 --memory=2200 --alsologtostderr   | newest-cni-078760            | jenkins | v1.35.0 | 10 Feb 25 13:24 UTC | 10 Feb 25 13:25 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-078760 image list                           | newest-cni-078760            | jenkins | v1.35.0 | 10 Feb 25 13:25 UTC | 10 Feb 25 13:25 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-078760                                   | newest-cni-078760            | jenkins | v1.35.0 | 10 Feb 25 13:25 UTC | 10 Feb 25 13:25 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-078760                                   | newest-cni-078760            | jenkins | v1.35.0 | 10 Feb 25 13:25 UTC | 10 Feb 25 13:25 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-078760                                   | newest-cni-078760            | jenkins | v1.35.0 | 10 Feb 25 13:25 UTC | 10 Feb 25 13:25 UTC |
	| delete  | -p newest-cni-078760                                   | newest-cni-078760            | jenkins | v1.35.0 | 10 Feb 25 13:25 UTC | 10 Feb 25 13:25 UTC |
	| image   | default-k8s-diff-port-957542                           | default-k8s-diff-port-957542 | jenkins | v1.35.0 | 10 Feb 25 13:28 UTC | 10 Feb 25 13:28 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-957542 | jenkins | v1.35.0 | 10 Feb 25 13:28 UTC | 10 Feb 25 13:28 UTC |
	|         | default-k8s-diff-port-957542                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-957542 | jenkins | v1.35.0 | 10 Feb 25 13:28 UTC | 10 Feb 25 13:28 UTC |
	|         | default-k8s-diff-port-957542                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-957542 | jenkins | v1.35.0 | 10 Feb 25 13:28 UTC | 10 Feb 25 13:28 UTC |
	|         | default-k8s-diff-port-957542                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-957542 | jenkins | v1.35.0 | 10 Feb 25 13:28 UTC | 10 Feb 25 13:28 UTC |
	|         | default-k8s-diff-port-957542                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/10 13:24:41
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0210 13:24:41.261359  691489 out.go:345] Setting OutFile to fd 1 ...
	I0210 13:24:41.261536  691489 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 13:24:41.261547  691489 out.go:358] Setting ErrFile to fd 2...
	I0210 13:24:41.261554  691489 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 13:24:41.261746  691489 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20383-625153/.minikube/bin
	I0210 13:24:41.262302  691489 out.go:352] Setting JSON to false
	I0210 13:24:41.263380  691489 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":18431,"bootTime":1739175450,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0210 13:24:41.263451  691489 start.go:139] virtualization: kvm guest
	I0210 13:24:41.265793  691489 out.go:177] * [newest-cni-078760] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0210 13:24:41.267418  691489 notify.go:220] Checking for updates...
	I0210 13:24:41.267458  691489 out.go:177]   - MINIKUBE_LOCATION=20383
	I0210 13:24:41.268698  691489 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 13:24:41.270028  691489 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20383-625153/kubeconfig
	I0210 13:24:41.271343  691489 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20383-625153/.minikube
	I0210 13:24:41.272529  691489 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0210 13:24:41.273658  691489 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0210 13:24:41.275235  691489 config.go:182] Loaded profile config "newest-cni-078760": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 13:24:41.275676  691489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:24:41.275733  691489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:24:41.291098  691489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34559
	I0210 13:24:41.291639  691489 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:24:41.292262  691489 main.go:141] libmachine: Using API Version  1
	I0210 13:24:41.292292  691489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:24:41.292606  691489 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:24:41.292771  691489 main.go:141] libmachine: (newest-cni-078760) Calling .DriverName
	I0210 13:24:41.292989  691489 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 13:24:41.293438  691489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:24:41.293515  691489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:24:41.308113  691489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38677
	I0210 13:24:41.308493  691489 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:24:41.308908  691489 main.go:141] libmachine: Using API Version  1
	I0210 13:24:41.308925  691489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:24:41.309289  691489 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:24:41.309516  691489 main.go:141] libmachine: (newest-cni-078760) Calling .DriverName
	I0210 13:24:41.345364  691489 out.go:177] * Using the kvm2 driver based on existing profile
	I0210 13:24:41.346519  691489 start.go:297] selected driver: kvm2
	I0210 13:24:41.346533  691489 start.go:901] validating driver "kvm2" against &{Name:newest-cni-078760 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-078760 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Net
work: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 13:24:41.346634  691489 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0210 13:24:41.347359  691489 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 13:24:41.347444  691489 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20383-625153/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0210 13:24:41.361853  691489 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0210 13:24:41.362275  691489 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0210 13:24:41.362308  691489 cni.go:84] Creating CNI manager for ""
	I0210 13:24:41.362373  691489 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 13:24:41.362421  691489 start.go:340] cluster config:
	{Name:newest-cni-078760 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-078760 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIS
erverIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s
Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 13:24:41.362555  691489 iso.go:125] acquiring lock: {Name:mk013d189757e85c0699014f5ef29205d9d4927f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 13:24:41.365015  691489 out.go:177] * Starting "newest-cni-078760" primary control-plane node in "newest-cni-078760" cluster
	I0210 13:24:41.366217  691489 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0210 13:24:41.366274  691489 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20383-625153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0210 13:24:41.366291  691489 cache.go:56] Caching tarball of preloaded images
	I0210 13:24:41.366377  691489 preload.go:172] Found /home/jenkins/minikube-integration/20383-625153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0210 13:24:41.366391  691489 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0210 13:24:41.366538  691489 profile.go:143] Saving config to /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/newest-cni-078760/config.json ...
	I0210 13:24:41.366777  691489 start.go:360] acquireMachinesLock for newest-cni-078760: {Name:mk28e87da66de739a4c7c70d1fb5afc4ce31a4d0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0210 13:24:41.366839  691489 start.go:364] duration metric: took 35.147µs to acquireMachinesLock for "newest-cni-078760"
	I0210 13:24:41.366859  691489 start.go:96] Skipping create...Using existing machine configuration
	I0210 13:24:41.366868  691489 fix.go:54] fixHost starting: 
	I0210 13:24:41.367244  691489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:24:41.367288  691489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:24:41.381304  691489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38575
	I0210 13:24:41.381768  691489 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:24:41.382361  691489 main.go:141] libmachine: Using API Version  1
	I0210 13:24:41.382386  691489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:24:41.382722  691489 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:24:41.382913  691489 main.go:141] libmachine: (newest-cni-078760) Calling .DriverName
	I0210 13:24:41.383081  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetState
	I0210 13:24:41.385267  691489 fix.go:112] recreateIfNeeded on newest-cni-078760: state=Stopped err=<nil>
	I0210 13:24:41.385305  691489 main.go:141] libmachine: (newest-cni-078760) Calling .DriverName
	W0210 13:24:41.385473  691489 fix.go:138] unexpected machine state, will restart: <nil>
	I0210 13:24:41.387457  691489 out.go:177] * Restarting existing kvm2 VM for "newest-cni-078760" ...
	I0210 13:24:39.769831  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:24:41.770142  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:24:40.661417  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:24:40.673492  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:24:40.673565  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:24:40.704651  688914 cri.go:89] found id: ""
	I0210 13:24:40.704682  688914 logs.go:282] 0 containers: []
	W0210 13:24:40.704691  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:24:40.704698  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:24:40.704757  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:24:40.738312  688914 cri.go:89] found id: ""
	I0210 13:24:40.738340  688914 logs.go:282] 0 containers: []
	W0210 13:24:40.738348  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:24:40.738355  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:24:40.738427  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:24:40.770358  688914 cri.go:89] found id: ""
	I0210 13:24:40.770392  688914 logs.go:282] 0 containers: []
	W0210 13:24:40.770404  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:24:40.770413  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:24:40.770483  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:24:40.806743  688914 cri.go:89] found id: ""
	I0210 13:24:40.806777  688914 logs.go:282] 0 containers: []
	W0210 13:24:40.806789  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:24:40.806797  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:24:40.806856  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:24:40.838580  688914 cri.go:89] found id: ""
	I0210 13:24:40.838614  688914 logs.go:282] 0 containers: []
	W0210 13:24:40.838626  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:24:40.838643  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:24:40.838715  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:24:40.869410  688914 cri.go:89] found id: ""
	I0210 13:24:40.869441  688914 logs.go:282] 0 containers: []
	W0210 13:24:40.869449  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:24:40.869456  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:24:40.869520  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:24:40.903978  688914 cri.go:89] found id: ""
	I0210 13:24:40.904005  688914 logs.go:282] 0 containers: []
	W0210 13:24:40.904014  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:24:40.904019  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:24:40.904086  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:24:40.937376  688914 cri.go:89] found id: ""
	I0210 13:24:40.937408  688914 logs.go:282] 0 containers: []
	W0210 13:24:40.937416  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:24:40.937426  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:24:40.937444  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:24:40.987586  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:24:40.987628  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:24:41.000596  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:24:41.000625  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:24:41.075352  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:24:41.075376  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:24:41.075396  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:24:41.155409  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:24:41.155441  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:24:43.696222  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:24:43.709019  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:24:43.709115  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:24:43.741277  688914 cri.go:89] found id: ""
	I0210 13:24:43.741309  688914 logs.go:282] 0 containers: []
	W0210 13:24:43.741319  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:24:43.741328  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:24:43.741393  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:24:43.780217  688914 cri.go:89] found id: ""
	I0210 13:24:43.780248  688914 logs.go:282] 0 containers: []
	W0210 13:24:43.780259  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:24:43.780267  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:24:43.780326  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:24:43.818627  688914 cri.go:89] found id: ""
	I0210 13:24:43.818660  688914 logs.go:282] 0 containers: []
	W0210 13:24:43.818673  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:24:43.818681  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:24:43.818747  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:24:43.855216  688914 cri.go:89] found id: ""
	I0210 13:24:43.855248  688914 logs.go:282] 0 containers: []
	W0210 13:24:43.855258  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:24:43.855266  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:24:43.855331  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:24:43.889360  688914 cri.go:89] found id: ""
	I0210 13:24:43.889394  688914 logs.go:282] 0 containers: []
	W0210 13:24:43.889402  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:24:43.889410  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:24:43.889476  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:24:43.934224  688914 cri.go:89] found id: ""
	I0210 13:24:43.934258  688914 logs.go:282] 0 containers: []
	W0210 13:24:43.934266  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:24:43.934273  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:24:43.934329  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:24:43.974800  688914 cri.go:89] found id: ""
	I0210 13:24:43.974830  688914 logs.go:282] 0 containers: []
	W0210 13:24:43.974837  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:24:43.974844  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:24:43.974897  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:24:44.017085  688914 cri.go:89] found id: ""
	I0210 13:24:44.017128  688914 logs.go:282] 0 containers: []
	W0210 13:24:44.017139  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:24:44.017152  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:24:44.017171  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:24:44.067430  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:24:44.067470  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:24:44.081581  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:24:44.081618  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:24:44.153720  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:24:44.153743  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:24:44.153810  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:24:44.235557  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:24:44.235597  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:24:41.388557  691489 main.go:141] libmachine: (newest-cni-078760) Calling .Start
	I0210 13:24:41.388729  691489 main.go:141] libmachine: (newest-cni-078760) starting domain...
	I0210 13:24:41.388749  691489 main.go:141] libmachine: (newest-cni-078760) ensuring networks are active...
	I0210 13:24:41.389682  691489 main.go:141] libmachine: (newest-cni-078760) Ensuring network default is active
	I0210 13:24:41.390063  691489 main.go:141] libmachine: (newest-cni-078760) Ensuring network mk-newest-cni-078760 is active
	I0210 13:24:41.390463  691489 main.go:141] libmachine: (newest-cni-078760) getting domain XML...
	I0210 13:24:41.391221  691489 main.go:141] libmachine: (newest-cni-078760) creating domain...
	I0210 13:24:42.616334  691489 main.go:141] libmachine: (newest-cni-078760) waiting for IP...
	I0210 13:24:42.617299  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:42.617829  691489 main.go:141] libmachine: (newest-cni-078760) DBG | unable to find current IP address of domain newest-cni-078760 in network mk-newest-cni-078760
	I0210 13:24:42.617918  691489 main.go:141] libmachine: (newest-cni-078760) DBG | I0210 13:24:42.617824  691524 retry.go:31] will retry after 283.264685ms: waiting for domain to come up
	I0210 13:24:42.903325  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:42.904000  691489 main.go:141] libmachine: (newest-cni-078760) DBG | unable to find current IP address of domain newest-cni-078760 in network mk-newest-cni-078760
	I0210 13:24:42.904028  691489 main.go:141] libmachine: (newest-cni-078760) DBG | I0210 13:24:42.903933  691524 retry.go:31] will retry after 344.515197ms: waiting for domain to come up
	I0210 13:24:43.250750  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:43.251374  691489 main.go:141] libmachine: (newest-cni-078760) DBG | unable to find current IP address of domain newest-cni-078760 in network mk-newest-cni-078760
	I0210 13:24:43.251425  691489 main.go:141] libmachine: (newest-cni-078760) DBG | I0210 13:24:43.251339  691524 retry.go:31] will retry after 393.453533ms: waiting for domain to come up
	I0210 13:24:43.646892  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:43.647502  691489 main.go:141] libmachine: (newest-cni-078760) DBG | unable to find current IP address of domain newest-cni-078760 in network mk-newest-cni-078760
	I0210 13:24:43.647530  691489 main.go:141] libmachine: (newest-cni-078760) DBG | I0210 13:24:43.647479  691524 retry.go:31] will retry after 372.747782ms: waiting for domain to come up
	I0210 13:24:44.022175  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:44.022720  691489 main.go:141] libmachine: (newest-cni-078760) DBG | unable to find current IP address of domain newest-cni-078760 in network mk-newest-cni-078760
	I0210 13:24:44.022762  691489 main.go:141] libmachine: (newest-cni-078760) DBG | I0210 13:24:44.022643  691524 retry.go:31] will retry after 498.159478ms: waiting for domain to come up
	I0210 13:24:44.522570  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:44.523198  691489 main.go:141] libmachine: (newest-cni-078760) DBG | unable to find current IP address of domain newest-cni-078760 in network mk-newest-cni-078760
	I0210 13:24:44.523228  691489 main.go:141] libmachine: (newest-cni-078760) DBG | I0210 13:24:44.523153  691524 retry.go:31] will retry after 604.957125ms: waiting for domain to come up
	I0210 13:24:45.129970  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:45.130451  691489 main.go:141] libmachine: (newest-cni-078760) DBG | unable to find current IP address of domain newest-cni-078760 in network mk-newest-cni-078760
	I0210 13:24:45.130473  691489 main.go:141] libmachine: (newest-cni-078760) DBG | I0210 13:24:45.130420  691524 retry.go:31] will retry after 898.332464ms: waiting for domain to come up
	I0210 13:24:46.030650  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:46.031180  691489 main.go:141] libmachine: (newest-cni-078760) DBG | unable to find current IP address of domain newest-cni-078760 in network mk-newest-cni-078760
	I0210 13:24:46.031209  691489 main.go:141] libmachine: (newest-cni-078760) DBG | I0210 13:24:46.031128  691524 retry.go:31] will retry after 1.265422975s: waiting for domain to come up
	I0210 13:24:44.271495  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:24:46.770352  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:24:46.773208  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:24:46.785471  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:24:46.785541  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:24:46.819010  688914 cri.go:89] found id: ""
	I0210 13:24:46.819043  688914 logs.go:282] 0 containers: []
	W0210 13:24:46.819053  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:24:46.819061  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:24:46.819125  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:24:46.851361  688914 cri.go:89] found id: ""
	I0210 13:24:46.851395  688914 logs.go:282] 0 containers: []
	W0210 13:24:46.851408  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:24:46.851416  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:24:46.851489  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:24:46.887040  688914 cri.go:89] found id: ""
	I0210 13:24:46.887074  688914 logs.go:282] 0 containers: []
	W0210 13:24:46.887086  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:24:46.887094  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:24:46.887159  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:24:46.919719  688914 cri.go:89] found id: ""
	I0210 13:24:46.919752  688914 logs.go:282] 0 containers: []
	W0210 13:24:46.919763  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:24:46.919780  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:24:46.919854  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:24:46.962383  688914 cri.go:89] found id: ""
	I0210 13:24:46.962416  688914 logs.go:282] 0 containers: []
	W0210 13:24:46.962429  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:24:46.962438  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:24:46.962510  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:24:46.997529  688914 cri.go:89] found id: ""
	I0210 13:24:46.997558  688914 logs.go:282] 0 containers: []
	W0210 13:24:46.997567  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:24:46.997573  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:24:46.997624  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:24:47.034666  688914 cri.go:89] found id: ""
	I0210 13:24:47.034698  688914 logs.go:282] 0 containers: []
	W0210 13:24:47.034709  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:24:47.034717  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:24:47.034772  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:24:47.072750  688914 cri.go:89] found id: ""
	I0210 13:24:47.072780  688914 logs.go:282] 0 containers: []
	W0210 13:24:47.072788  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:24:47.072799  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:24:47.072811  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:24:47.126909  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:24:47.126946  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:24:47.139755  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:24:47.139783  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:24:47.207327  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:24:47.207369  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:24:47.207395  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:24:47.296476  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:24:47.296530  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:24:49.839781  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:24:49.852562  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:24:49.852630  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:24:49.887112  688914 cri.go:89] found id: ""
	I0210 13:24:49.887146  688914 logs.go:282] 0 containers: []
	W0210 13:24:49.887160  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:24:49.887179  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:24:49.887245  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:24:49.920850  688914 cri.go:89] found id: ""
	I0210 13:24:49.920878  688914 logs.go:282] 0 containers: []
	W0210 13:24:49.920885  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:24:49.920891  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:24:49.920944  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:24:49.950969  688914 cri.go:89] found id: ""
	I0210 13:24:49.951002  688914 logs.go:282] 0 containers: []
	W0210 13:24:49.951010  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:24:49.951017  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:24:49.951074  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:24:49.985312  688914 cri.go:89] found id: ""
	I0210 13:24:49.985341  688914 logs.go:282] 0 containers: []
	W0210 13:24:49.985350  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:24:49.985357  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:24:49.985420  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:24:50.022609  688914 cri.go:89] found id: ""
	I0210 13:24:50.022643  688914 logs.go:282] 0 containers: []
	W0210 13:24:50.022654  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:24:50.022662  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:24:50.022741  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:24:50.060874  688914 cri.go:89] found id: ""
	I0210 13:24:50.060910  688914 logs.go:282] 0 containers: []
	W0210 13:24:50.060921  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:24:50.060928  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:24:50.060995  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:24:50.105868  688914 cri.go:89] found id: ""
	I0210 13:24:50.105904  688914 logs.go:282] 0 containers: []
	W0210 13:24:50.105916  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:24:50.105924  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:24:50.105987  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:24:47.297831  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:47.298426  691489 main.go:141] libmachine: (newest-cni-078760) DBG | unable to find current IP address of domain newest-cni-078760 in network mk-newest-cni-078760
	I0210 13:24:47.298458  691489 main.go:141] libmachine: (newest-cni-078760) DBG | I0210 13:24:47.298379  691524 retry.go:31] will retry after 1.501368767s: waiting for domain to come up
	I0210 13:24:48.802064  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:48.802681  691489 main.go:141] libmachine: (newest-cni-078760) DBG | unable to find current IP address of domain newest-cni-078760 in network mk-newest-cni-078760
	I0210 13:24:48.802713  691489 main.go:141] libmachine: (newest-cni-078760) DBG | I0210 13:24:48.802644  691524 retry.go:31] will retry after 1.952900788s: waiting for domain to come up
	I0210 13:24:50.757205  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:50.757657  691489 main.go:141] libmachine: (newest-cni-078760) DBG | unable to find current IP address of domain newest-cni-078760 in network mk-newest-cni-078760
	I0210 13:24:50.757681  691489 main.go:141] libmachine: (newest-cni-078760) DBG | I0210 13:24:50.757634  691524 retry.go:31] will retry after 2.841299634s: waiting for domain to come up
	I0210 13:24:48.770842  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:24:50.771415  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:24:50.143929  688914 cri.go:89] found id: ""
	I0210 13:24:50.143961  688914 logs.go:282] 0 containers: []
	W0210 13:24:50.143980  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:24:50.143990  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:24:50.144006  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:24:50.205049  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:24:50.205092  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:24:50.224083  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:24:50.224118  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:24:50.291786  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:24:50.291812  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:24:50.291831  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:24:50.371326  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:24:50.371371  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:24:52.919235  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:24:52.937153  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:24:52.937253  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:24:52.969532  688914 cri.go:89] found id: ""
	I0210 13:24:52.969567  688914 logs.go:282] 0 containers: []
	W0210 13:24:52.969578  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:24:52.969586  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:24:52.969647  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:24:53.002238  688914 cri.go:89] found id: ""
	I0210 13:24:53.002269  688914 logs.go:282] 0 containers: []
	W0210 13:24:53.002280  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:24:53.002287  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:24:53.002362  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:24:53.035346  688914 cri.go:89] found id: ""
	I0210 13:24:53.035376  688914 logs.go:282] 0 containers: []
	W0210 13:24:53.035384  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:24:53.035392  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:24:53.035461  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:24:53.072805  688914 cri.go:89] found id: ""
	I0210 13:24:53.072897  688914 logs.go:282] 0 containers: []
	W0210 13:24:53.072916  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:24:53.072926  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:24:53.073004  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:24:53.110660  688914 cri.go:89] found id: ""
	I0210 13:24:53.110691  688914 logs.go:282] 0 containers: []
	W0210 13:24:53.110702  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:24:53.110712  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:24:53.110780  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:24:53.147192  688914 cri.go:89] found id: ""
	I0210 13:24:53.147222  688914 logs.go:282] 0 containers: []
	W0210 13:24:53.147233  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:24:53.147242  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:24:53.147309  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:24:53.182225  688914 cri.go:89] found id: ""
	I0210 13:24:53.182260  688914 logs.go:282] 0 containers: []
	W0210 13:24:53.182272  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:24:53.182280  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:24:53.182356  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:24:53.222558  688914 cri.go:89] found id: ""
	I0210 13:24:53.222590  688914 logs.go:282] 0 containers: []
	W0210 13:24:53.222601  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:24:53.222614  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:24:53.222630  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:24:53.279358  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:24:53.279408  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:24:53.294748  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:24:53.294787  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:24:53.369719  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:24:53.369745  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:24:53.369762  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:24:53.451596  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:24:53.451639  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:24:53.601402  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:53.601912  691489 main.go:141] libmachine: (newest-cni-078760) DBG | unable to find current IP address of domain newest-cni-078760 in network mk-newest-cni-078760
	I0210 13:24:53.601961  691489 main.go:141] libmachine: (newest-cni-078760) DBG | I0210 13:24:53.601883  691524 retry.go:31] will retry after 2.542274821s: waiting for domain to come up
	I0210 13:24:56.146274  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:56.146832  691489 main.go:141] libmachine: (newest-cni-078760) DBG | unable to find current IP address of domain newest-cni-078760 in network mk-newest-cni-078760
	I0210 13:24:56.146863  691489 main.go:141] libmachine: (newest-cni-078760) DBG | I0210 13:24:56.146790  691524 retry.go:31] will retry after 3.125209956s: waiting for domain to come up
	I0210 13:24:52.779375  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:24:55.269617  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:24:57.271040  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:24:55.993228  688914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:24:56.005645  688914 kubeadm.go:597] duration metric: took 4m2.60696863s to restartPrimaryControlPlane
	W0210 13:24:56.005721  688914 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0210 13:24:56.005746  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0210 13:24:56.513498  688914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 13:24:56.526951  688914 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0210 13:24:56.536360  688914 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 13:24:56.544989  688914 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 13:24:56.545005  688914 kubeadm.go:157] found existing configuration files:
	
	I0210 13:24:56.545053  688914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0210 13:24:56.553248  688914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 13:24:56.553299  688914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 13:24:56.562196  688914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0210 13:24:56.570708  688914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 13:24:56.570756  688914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 13:24:56.580086  688914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0210 13:24:56.588161  688914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 13:24:56.588207  688914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 13:24:56.596487  688914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0210 13:24:56.604340  688914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 13:24:56.604385  688914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0210 13:24:56.612499  688914 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0210 13:24:56.823209  688914 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0210 13:24:59.274113  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:59.274657  691489 main.go:141] libmachine: (newest-cni-078760) found domain IP: 192.168.39.250
	I0210 13:24:59.274689  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has current primary IP address 192.168.39.250 and MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:59.274697  691489 main.go:141] libmachine: (newest-cni-078760) reserving static IP address...
	I0210 13:24:59.275163  691489 main.go:141] libmachine: (newest-cni-078760) DBG | found host DHCP lease matching {name: "newest-cni-078760", mac: "52:54:00:6b:a1:b8", ip: "192.168.39.250"} in network mk-newest-cni-078760: {Iface:virbr1 ExpiryTime:2025-02-10 14:24:52 +0000 UTC Type:0 Mac:52:54:00:6b:a1:b8 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:newest-cni-078760 Clientid:01:52:54:00:6b:a1:b8}
	I0210 13:24:59.275200  691489 main.go:141] libmachine: (newest-cni-078760) DBG | skip adding static IP to network mk-newest-cni-078760 - found existing host DHCP lease matching {name: "newest-cni-078760", mac: "52:54:00:6b:a1:b8", ip: "192.168.39.250"}
	I0210 13:24:59.275212  691489 main.go:141] libmachine: (newest-cni-078760) reserved static IP address 192.168.39.250 for domain newest-cni-078760
	I0210 13:24:59.275224  691489 main.go:141] libmachine: (newest-cni-078760) waiting for SSH...
	I0210 13:24:59.275240  691489 main.go:141] libmachine: (newest-cni-078760) DBG | Getting to WaitForSSH function...
	I0210 13:24:59.277564  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:59.277937  691489 main.go:141] libmachine: (newest-cni-078760) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a1:b8", ip: ""} in network mk-newest-cni-078760: {Iface:virbr1 ExpiryTime:2025-02-10 14:24:52 +0000 UTC Type:0 Mac:52:54:00:6b:a1:b8 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:newest-cni-078760 Clientid:01:52:54:00:6b:a1:b8}
	I0210 13:24:59.277972  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined IP address 192.168.39.250 and MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:59.278049  691489 main.go:141] libmachine: (newest-cni-078760) DBG | Using SSH client type: external
	I0210 13:24:59.278098  691489 main.go:141] libmachine: (newest-cni-078760) DBG | Using SSH private key: /home/jenkins/minikube-integration/20383-625153/.minikube/machines/newest-cni-078760/id_rsa (-rw-------)
	I0210 13:24:59.278150  691489 main.go:141] libmachine: (newest-cni-078760) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.250 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20383-625153/.minikube/machines/newest-cni-078760/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0210 13:24:59.278164  691489 main.go:141] libmachine: (newest-cni-078760) DBG | About to run SSH command:
	I0210 13:24:59.278172  691489 main.go:141] libmachine: (newest-cni-078760) DBG | exit 0
	I0210 13:24:59.405034  691489 main.go:141] libmachine: (newest-cni-078760) DBG | SSH cmd err, output: <nil>: 
	I0210 13:24:59.405508  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetConfigRaw
	I0210 13:24:59.406149  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetIP
	I0210 13:24:59.408696  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:59.409061  691489 main.go:141] libmachine: (newest-cni-078760) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a1:b8", ip: ""} in network mk-newest-cni-078760: {Iface:virbr1 ExpiryTime:2025-02-10 14:24:52 +0000 UTC Type:0 Mac:52:54:00:6b:a1:b8 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:newest-cni-078760 Clientid:01:52:54:00:6b:a1:b8}
	I0210 13:24:59.409097  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined IP address 192.168.39.250 and MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:59.409422  691489 profile.go:143] Saving config to /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/newest-cni-078760/config.json ...
	I0210 13:24:59.409617  691489 machine.go:93] provisionDockerMachine start ...
	I0210 13:24:59.409635  691489 main.go:141] libmachine: (newest-cni-078760) Calling .DriverName
	I0210 13:24:59.409892  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHHostname
	I0210 13:24:59.412202  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:59.412549  691489 main.go:141] libmachine: (newest-cni-078760) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a1:b8", ip: ""} in network mk-newest-cni-078760: {Iface:virbr1 ExpiryTime:2025-02-10 14:24:52 +0000 UTC Type:0 Mac:52:54:00:6b:a1:b8 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:newest-cni-078760 Clientid:01:52:54:00:6b:a1:b8}
	I0210 13:24:59.412570  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined IP address 192.168.39.250 and MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:59.412770  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHPort
	I0210 13:24:59.412949  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHKeyPath
	I0210 13:24:59.413066  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHKeyPath
	I0210 13:24:59.413229  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHUsername
	I0210 13:24:59.413383  691489 main.go:141] libmachine: Using SSH client type: native
	I0210 13:24:59.413675  691489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0210 13:24:59.413693  691489 main.go:141] libmachine: About to run SSH command:
	hostname
	I0210 13:24:59.520985  691489 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0210 13:24:59.521014  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetMachineName
	I0210 13:24:59.521304  691489 buildroot.go:166] provisioning hostname "newest-cni-078760"
	I0210 13:24:59.521348  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetMachineName
	I0210 13:24:59.521546  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHHostname
	I0210 13:24:59.524011  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:59.524395  691489 main.go:141] libmachine: (newest-cni-078760) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a1:b8", ip: ""} in network mk-newest-cni-078760: {Iface:virbr1 ExpiryTime:2025-02-10 14:24:52 +0000 UTC Type:0 Mac:52:54:00:6b:a1:b8 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:newest-cni-078760 Clientid:01:52:54:00:6b:a1:b8}
	I0210 13:24:59.524426  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined IP address 192.168.39.250 and MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:59.524511  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHPort
	I0210 13:24:59.524677  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHKeyPath
	I0210 13:24:59.524830  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHKeyPath
	I0210 13:24:59.524930  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHUsername
	I0210 13:24:59.525090  691489 main.go:141] libmachine: Using SSH client type: native
	I0210 13:24:59.525301  691489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0210 13:24:59.525317  691489 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-078760 && echo "newest-cni-078760" | sudo tee /etc/hostname
	I0210 13:24:59.646397  691489 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-078760
	
	I0210 13:24:59.646428  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHHostname
	I0210 13:24:59.649460  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:59.649855  691489 main.go:141] libmachine: (newest-cni-078760) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a1:b8", ip: ""} in network mk-newest-cni-078760: {Iface:virbr1 ExpiryTime:2025-02-10 14:24:52 +0000 UTC Type:0 Mac:52:54:00:6b:a1:b8 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:newest-cni-078760 Clientid:01:52:54:00:6b:a1:b8}
	I0210 13:24:59.649887  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined IP address 192.168.39.250 and MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:59.650122  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHPort
	I0210 13:24:59.650345  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHKeyPath
	I0210 13:24:59.650510  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHKeyPath
	I0210 13:24:59.650661  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHUsername
	I0210 13:24:59.650865  691489 main.go:141] libmachine: Using SSH client type: native
	I0210 13:24:59.651057  691489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0210 13:24:59.651075  691489 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-078760' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-078760/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-078760' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0210 13:24:59.765308  691489 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0210 13:24:59.765347  691489 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20383-625153/.minikube CaCertPath:/home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20383-625153/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20383-625153/.minikube}
	I0210 13:24:59.765387  691489 buildroot.go:174] setting up certificates
	I0210 13:24:59.765401  691489 provision.go:84] configureAuth start
	I0210 13:24:59.765424  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetMachineName
	I0210 13:24:59.765729  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetIP
	I0210 13:24:59.768971  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:59.769366  691489 main.go:141] libmachine: (newest-cni-078760) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a1:b8", ip: ""} in network mk-newest-cni-078760: {Iface:virbr1 ExpiryTime:2025-02-10 14:24:52 +0000 UTC Type:0 Mac:52:54:00:6b:a1:b8 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:newest-cni-078760 Clientid:01:52:54:00:6b:a1:b8}
	I0210 13:24:59.769391  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined IP address 192.168.39.250 and MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:59.769640  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHHostname
	I0210 13:24:59.772244  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:59.772630  691489 main.go:141] libmachine: (newest-cni-078760) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a1:b8", ip: ""} in network mk-newest-cni-078760: {Iface:virbr1 ExpiryTime:2025-02-10 14:24:52 +0000 UTC Type:0 Mac:52:54:00:6b:a1:b8 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:newest-cni-078760 Clientid:01:52:54:00:6b:a1:b8}
	I0210 13:24:59.772667  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined IP address 192.168.39.250 and MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:24:59.772825  691489 provision.go:143] copyHostCerts
	I0210 13:24:59.772893  691489 exec_runner.go:144] found /home/jenkins/minikube-integration/20383-625153/.minikube/ca.pem, removing ...
	I0210 13:24:59.772903  691489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20383-625153/.minikube/ca.pem
	I0210 13:24:59.772968  691489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20383-625153/.minikube/ca.pem (1082 bytes)
	I0210 13:24:59.773076  691489 exec_runner.go:144] found /home/jenkins/minikube-integration/20383-625153/.minikube/cert.pem, removing ...
	I0210 13:24:59.773084  691489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20383-625153/.minikube/cert.pem
	I0210 13:24:59.773148  691489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20383-625153/.minikube/cert.pem (1123 bytes)
	I0210 13:24:59.773228  691489 exec_runner.go:144] found /home/jenkins/minikube-integration/20383-625153/.minikube/key.pem, removing ...
	I0210 13:24:59.773236  691489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20383-625153/.minikube/key.pem
	I0210 13:24:59.773260  691489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20383-625153/.minikube/key.pem (1675 bytes)
	I0210 13:24:59.773329  691489 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20383-625153/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca-key.pem org=jenkins.newest-cni-078760 san=[127.0.0.1 192.168.39.250 localhost minikube newest-cni-078760]
	I0210 13:25:00.289725  691489 provision.go:177] copyRemoteCerts
	I0210 13:25:00.289790  691489 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0210 13:25:00.289817  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHHostname
	I0210 13:25:00.292758  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:00.293115  691489 main.go:141] libmachine: (newest-cni-078760) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a1:b8", ip: ""} in network mk-newest-cni-078760: {Iface:virbr1 ExpiryTime:2025-02-10 14:24:52 +0000 UTC Type:0 Mac:52:54:00:6b:a1:b8 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:newest-cni-078760 Clientid:01:52:54:00:6b:a1:b8}
	I0210 13:25:00.293149  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined IP address 192.168.39.250 and MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:00.293357  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHPort
	I0210 13:25:00.293603  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHKeyPath
	I0210 13:25:00.293811  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHUsername
	I0210 13:25:00.293957  691489 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/newest-cni-078760/id_rsa Username:docker}
	I0210 13:25:00.383066  691489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0210 13:25:00.405672  691489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0210 13:25:00.428091  691489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0210 13:25:00.448809  691489 provision.go:87] duration metric: took 683.388073ms to configureAuth
	I0210 13:25:00.448837  691489 buildroot.go:189] setting minikube options for container-runtime
	I0210 13:25:00.449011  691489 config.go:182] Loaded profile config "newest-cni-078760": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 13:25:00.449092  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHHostname
	I0210 13:25:00.451834  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:00.452228  691489 main.go:141] libmachine: (newest-cni-078760) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a1:b8", ip: ""} in network mk-newest-cni-078760: {Iface:virbr1 ExpiryTime:2025-02-10 14:24:52 +0000 UTC Type:0 Mac:52:54:00:6b:a1:b8 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:newest-cni-078760 Clientid:01:52:54:00:6b:a1:b8}
	I0210 13:25:00.452255  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined IP address 192.168.39.250 and MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:00.452441  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHPort
	I0210 13:25:00.452649  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHKeyPath
	I0210 13:25:00.452807  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHKeyPath
	I0210 13:25:00.452911  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHUsername
	I0210 13:25:00.453073  691489 main.go:141] libmachine: Using SSH client type: native
	I0210 13:25:00.453278  691489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0210 13:25:00.453302  691489 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0210 13:25:00.672251  691489 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0210 13:25:00.672293  691489 machine.go:96] duration metric: took 1.262661195s to provisionDockerMachine
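	(The sysconfig step above can be reproduced by hand; a minimal sketch, assuming the same insecure-registry CIDR and the /etc/sysconfig/crio.minikube path shown in the SSH command:)
	# Write the CRI-O drop-in options and restart the runtime, mirroring the command logged above.
	sudo mkdir -p /etc/sysconfig
	printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" | sudo tee /etc/sysconfig/crio.minikube
	sudo systemctl restart crio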
	I0210 13:25:00.672311  691489 start.go:293] postStartSetup for "newest-cni-078760" (driver="kvm2")
	I0210 13:25:00.672325  691489 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0210 13:25:00.672351  691489 main.go:141] libmachine: (newest-cni-078760) Calling .DriverName
	I0210 13:25:00.672711  691489 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0210 13:25:00.672751  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHHostname
	I0210 13:25:00.675260  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:00.675668  691489 main.go:141] libmachine: (newest-cni-078760) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a1:b8", ip: ""} in network mk-newest-cni-078760: {Iface:virbr1 ExpiryTime:2025-02-10 14:24:52 +0000 UTC Type:0 Mac:52:54:00:6b:a1:b8 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:newest-cni-078760 Clientid:01:52:54:00:6b:a1:b8}
	I0210 13:25:00.675700  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined IP address 192.168.39.250 and MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:00.675807  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHPort
	I0210 13:25:00.675998  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHKeyPath
	I0210 13:25:00.676205  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHUsername
	I0210 13:25:00.676346  691489 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/newest-cni-078760/id_rsa Username:docker}
	I0210 13:25:00.758840  691489 ssh_runner.go:195] Run: cat /etc/os-release
	I0210 13:25:00.762542  691489 info.go:137] Remote host: Buildroot 2023.02.9
	I0210 13:25:00.762567  691489 filesync.go:126] Scanning /home/jenkins/minikube-integration/20383-625153/.minikube/addons for local assets ...
	I0210 13:25:00.762639  691489 filesync.go:126] Scanning /home/jenkins/minikube-integration/20383-625153/.minikube/files for local assets ...
	I0210 13:25:00.762734  691489 filesync.go:149] local asset: /home/jenkins/minikube-integration/20383-625153/.minikube/files/etc/ssl/certs/6323522.pem -> 6323522.pem in /etc/ssl/certs
	I0210 13:25:00.762860  691489 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0210 13:25:00.773351  691489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/files/etc/ssl/certs/6323522.pem --> /etc/ssl/certs/6323522.pem (1708 bytes)
	I0210 13:25:00.796618  691489 start.go:296] duration metric: took 124.2886ms for postStartSetup
	I0210 13:25:00.796673  691489 fix.go:56] duration metric: took 19.429804907s for fixHost
	I0210 13:25:00.796697  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHHostname
	I0210 13:25:00.799632  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:00.799962  691489 main.go:141] libmachine: (newest-cni-078760) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a1:b8", ip: ""} in network mk-newest-cni-078760: {Iface:virbr1 ExpiryTime:2025-02-10 14:24:52 +0000 UTC Type:0 Mac:52:54:00:6b:a1:b8 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:newest-cni-078760 Clientid:01:52:54:00:6b:a1:b8}
	I0210 13:25:00.799989  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined IP address 192.168.39.250 and MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:00.800218  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHPort
	I0210 13:25:00.800405  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHKeyPath
	I0210 13:25:00.800535  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHKeyPath
	I0210 13:25:00.800642  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHUsername
	I0210 13:25:00.800769  691489 main.go:141] libmachine: Using SSH client type: native
	I0210 13:25:00.800931  691489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0210 13:25:00.800941  691489 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0210 13:25:00.909435  691489 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739193900.883827731
	
	I0210 13:25:00.909467  691489 fix.go:216] guest clock: 1739193900.883827731
	I0210 13:25:00.909475  691489 fix.go:229] Guest: 2025-02-10 13:25:00.883827731 +0000 UTC Remote: 2025-02-10 13:25:00.796678487 +0000 UTC m=+19.572875336 (delta=87.149244ms)
	I0210 13:25:00.909527  691489 fix.go:200] guest clock delta is within tolerance: 87.149244ms
	I0210 13:25:00.909539  691489 start.go:83] releasing machines lock for "newest-cni-078760", held for 19.542688037s
	I0210 13:25:00.909575  691489 main.go:141] libmachine: (newest-cni-078760) Calling .DriverName
	I0210 13:25:00.909866  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetIP
	I0210 13:25:00.912692  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:00.913180  691489 main.go:141] libmachine: (newest-cni-078760) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a1:b8", ip: ""} in network mk-newest-cni-078760: {Iface:virbr1 ExpiryTime:2025-02-10 14:24:52 +0000 UTC Type:0 Mac:52:54:00:6b:a1:b8 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:newest-cni-078760 Clientid:01:52:54:00:6b:a1:b8}
	I0210 13:25:00.913209  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined IP address 192.168.39.250 and MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:00.913393  691489 main.go:141] libmachine: (newest-cni-078760) Calling .DriverName
	I0210 13:25:00.913968  691489 main.go:141] libmachine: (newest-cni-078760) Calling .DriverName
	I0210 13:25:00.914173  691489 main.go:141] libmachine: (newest-cni-078760) Calling .DriverName
	I0210 13:25:00.914234  691489 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0210 13:25:00.914286  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHHostname
	I0210 13:25:00.914386  691489 ssh_runner.go:195] Run: cat /version.json
	I0210 13:25:00.914413  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHHostname
	I0210 13:25:00.917197  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:00.917270  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:00.917549  691489 main.go:141] libmachine: (newest-cni-078760) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a1:b8", ip: ""} in network mk-newest-cni-078760: {Iface:virbr1 ExpiryTime:2025-02-10 14:24:52 +0000 UTC Type:0 Mac:52:54:00:6b:a1:b8 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:newest-cni-078760 Clientid:01:52:54:00:6b:a1:b8}
	I0210 13:25:00.917577  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined IP address 192.168.39.250 and MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:00.917603  691489 main.go:141] libmachine: (newest-cni-078760) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a1:b8", ip: ""} in network mk-newest-cni-078760: {Iface:virbr1 ExpiryTime:2025-02-10 14:24:52 +0000 UTC Type:0 Mac:52:54:00:6b:a1:b8 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:newest-cni-078760 Clientid:01:52:54:00:6b:a1:b8}
	I0210 13:25:00.917618  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined IP address 192.168.39.250 and MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:00.917755  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHPort
	I0210 13:25:00.917938  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHPort
	I0210 13:25:00.917969  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHKeyPath
	I0210 13:25:00.918181  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHUsername
	I0210 13:25:00.918186  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHKeyPath
	I0210 13:25:00.918323  691489 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/newest-cni-078760/id_rsa Username:docker}
	I0210 13:25:00.918506  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHUsername
	I0210 13:25:00.918627  691489 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/newest-cni-078760/id_rsa Username:docker}
	I0210 13:25:01.016816  691489 ssh_runner.go:195] Run: systemctl --version
	I0210 13:25:01.022398  691489 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0210 13:25:01.160711  691489 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0210 13:25:01.166231  691489 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0210 13:25:01.166308  691489 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0210 13:25:01.181307  691489 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0210 13:25:01.181340  691489 start.go:495] detecting cgroup driver to use...
	I0210 13:25:01.181432  691489 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0210 13:25:01.196599  691489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0210 13:25:01.210368  691489 docker.go:217] disabling cri-docker service (if available) ...
	I0210 13:25:01.210447  691489 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0210 13:25:01.224277  691489 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0210 13:25:01.237050  691489 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0210 13:25:01.363079  691489 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0210 13:25:01.505721  691489 docker.go:233] disabling docker service ...
	I0210 13:25:01.505798  691489 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0210 13:25:01.519404  691489 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0210 13:25:01.531569  691489 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0210 13:25:01.656701  691489 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0210 13:25:01.761785  691489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0210 13:25:01.775504  691489 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0210 13:25:01.793265  691489 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0210 13:25:01.793350  691489 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:25:01.802631  691489 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0210 13:25:01.802704  691489 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:25:01.811794  691489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:25:01.821081  691489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:25:01.830115  691489 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0210 13:25:01.839351  691489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:25:01.848567  691489 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:25:01.864326  691489 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:25:01.874772  691489 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0210 13:25:01.884394  691489 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0210 13:25:01.884474  691489 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0210 13:25:01.897647  691489 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0210 13:25:01.906297  691489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 13:25:02.014414  691489 ssh_runner.go:195] Run: sudo systemctl restart crio
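	(Taken together, the sed edits above amount to a small CRI-O drop-in change; a hedged sketch of the same adjustments against the 02-crio.conf path used in this run:)
	# Point CRI-O at the expected pause image and the cgroupfs driver, enable forwarding, then reload (as logged above).
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo modprobe br_netfilter        # needed before the net.bridge.* sysctls exist
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	sudo systemctl daemon-reload && sudo systemctl restart crio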
	I0210 13:25:02.104325  691489 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0210 13:25:02.104434  691489 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0210 13:25:02.108842  691489 start.go:563] Will wait 60s for crictl version
	I0210 13:25:02.108917  691489 ssh_runner.go:195] Run: which crictl
	I0210 13:25:02.112360  691489 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0210 13:25:02.153660  691489 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0210 13:25:02.153771  691489 ssh_runner.go:195] Run: crio --version
	I0210 13:25:02.180774  691489 ssh_runner.go:195] Run: crio --version
	I0210 13:25:02.212419  691489 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0210 13:25:02.213655  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetIP
	I0210 13:25:02.216337  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:02.216703  691489 main.go:141] libmachine: (newest-cni-078760) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a1:b8", ip: ""} in network mk-newest-cni-078760: {Iface:virbr1 ExpiryTime:2025-02-10 14:24:52 +0000 UTC Type:0 Mac:52:54:00:6b:a1:b8 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:newest-cni-078760 Clientid:01:52:54:00:6b:a1:b8}
	I0210 13:25:02.216731  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined IP address 192.168.39.250 and MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:02.217046  691489 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0210 13:25:02.221017  691489 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
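	(The /etc/hosts rewrite above follows a filter-then-append pattern so a stale entry is never duplicated; a minimal sketch with the same host.minikube.internal entry:)
	# Drop any old host.minikube.internal line, append the current gateway IP, and install the result.
	{ grep -v $'\thost.minikube.internal$' /etc/hosts; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts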
	I0210 13:25:02.234095  691489 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0210 13:24:59.770976  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:02.273787  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:02.235371  691489 kubeadm.go:883] updating cluster {Name:newest-cni-078760 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-078760 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0210 13:25:02.235495  691489 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0210 13:25:02.235552  691489 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 13:25:02.269571  691489 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0210 13:25:02.269654  691489 ssh_runner.go:195] Run: which lz4
	I0210 13:25:02.273617  691489 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0210 13:25:02.277988  691489 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0210 13:25:02.278024  691489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0210 13:25:03.523616  691489 crio.go:462] duration metric: took 1.250045789s to copy over tarball
	I0210 13:25:03.523702  691489 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0210 13:25:05.658254  691489 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.134495502s)
	I0210 13:25:05.658291  691489 crio.go:469] duration metric: took 2.134641092s to extract the tarball
	I0210 13:25:05.658303  691489 ssh_runner.go:146] rm: /preloaded.tar.lz4
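	(The preload handling above is a plain copy-and-extract; a minimal sketch, assuming the same tarball name and target directory:)
	# Unpack a cri-o image preload into /var, preserving file capabilities, then clean up (as the runner does above).
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm -f /preloaded.tar.lz4
	sudo crictl images --output json   # confirm the preloaded images are now visible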
	I0210 13:25:05.695477  691489 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 13:25:05.735472  691489 crio.go:514] all images are preloaded for cri-o runtime.
	I0210 13:25:05.735496  691489 cache_images.go:84] Images are preloaded, skipping loading
	I0210 13:25:05.735505  691489 kubeadm.go:934] updating node { 192.168.39.250 8443 v1.32.1 crio true true} ...
	I0210 13:25:05.735610  691489 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-078760 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.250
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:newest-cni-078760 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0210 13:25:05.735681  691489 ssh_runner.go:195] Run: crio config
	I0210 13:25:05.785195  691489 cni.go:84] Creating CNI manager for ""
	I0210 13:25:05.785224  691489 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 13:25:05.785234  691489 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0210 13:25:05.785263  691489 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.250 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-078760 NodeName:newest-cni-078760 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.250"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.250 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0210 13:25:05.785425  691489 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.250
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-078760"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.250"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.250"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0210 13:25:05.785511  691489 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0210 13:25:05.794956  691489 binaries.go:44] Found k8s binaries, skipping transfer
	I0210 13:25:05.795032  691489 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0210 13:25:05.804169  691489 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0210 13:25:05.819782  691489 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0210 13:25:05.835103  691489 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
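	(Once kubeadm.yaml.new above is in place, the later "kubeadm init phase" calls in this log apply it; a sketch of that invocation pattern, using the binary path and config location from this run:)
	# Re-run individual init phases against the generated config (same phases invoked further down in the log).
	sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml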
	I0210 13:25:05.851153  691489 ssh_runner.go:195] Run: grep 192.168.39.250	control-plane.minikube.internal$ /etc/hosts
	I0210 13:25:05.854677  691489 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.250	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 13:25:05.865911  691489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 13:25:05.995134  691489 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 13:25:06.017449  691489 certs.go:68] Setting up /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/newest-cni-078760 for IP: 192.168.39.250
	I0210 13:25:06.017475  691489 certs.go:194] generating shared ca certs ...
	I0210 13:25:06.017497  691489 certs.go:226] acquiring lock for ca certs: {Name:mkf2f72f82a6bd7e3c16bb224cd26b80c3c89e0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:25:06.017658  691489 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20383-625153/.minikube/ca.key
	I0210 13:25:06.017711  691489 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20383-625153/.minikube/proxy-client-ca.key
	I0210 13:25:06.017726  691489 certs.go:256] generating profile certs ...
	I0210 13:25:06.017814  691489 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/newest-cni-078760/client.key
	I0210 13:25:06.017907  691489 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/newest-cni-078760/apiserver.key.1c0773a6
	I0210 13:25:06.017962  691489 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/newest-cni-078760/proxy-client.key
	I0210 13:25:06.018106  691489 certs.go:484] found cert: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/632352.pem (1338 bytes)
	W0210 13:25:06.018145  691489 certs.go:480] ignoring /home/jenkins/minikube-integration/20383-625153/.minikube/certs/632352_empty.pem, impossibly tiny 0 bytes
	I0210 13:25:06.018160  691489 certs.go:484] found cert: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca-key.pem (1675 bytes)
	I0210 13:25:06.018194  691489 certs.go:484] found cert: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/ca.pem (1082 bytes)
	I0210 13:25:06.018255  691489 certs.go:484] found cert: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/cert.pem (1123 bytes)
	I0210 13:25:06.018301  691489 certs.go:484] found cert: /home/jenkins/minikube-integration/20383-625153/.minikube/certs/key.pem (1675 bytes)
	I0210 13:25:06.018360  691489 certs.go:484] found cert: /home/jenkins/minikube-integration/20383-625153/.minikube/files/etc/ssl/certs/6323522.pem (1708 bytes)
	I0210 13:25:06.019219  691489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0210 13:25:06.049870  691489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0210 13:25:06.079056  691489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0210 13:25:06.111520  691489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0210 13:25:06.144808  691489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/newest-cni-078760/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0210 13:25:06.170435  691489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/newest-cni-078760/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0210 13:25:06.193477  691489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/newest-cni-078760/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0210 13:25:06.216083  691489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/newest-cni-078760/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0210 13:25:06.237420  691489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/files/etc/ssl/certs/6323522.pem --> /usr/share/ca-certificates/6323522.pem (1708 bytes)
	I0210 13:25:06.259080  691489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0210 13:25:04.771284  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:07.270419  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:06.281857  691489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20383-625153/.minikube/certs/632352.pem --> /usr/share/ca-certificates/632352.pem (1338 bytes)
	I0210 13:25:06.303749  691489 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0210 13:25:06.319343  691489 ssh_runner.go:195] Run: openssl version
	I0210 13:25:06.324961  691489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/632352.pem && ln -fs /usr/share/ca-certificates/632352.pem /etc/ssl/certs/632352.pem"
	I0210 13:25:06.334777  691489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/632352.pem
	I0210 13:25:06.338786  691489 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 10 12:13 /usr/share/ca-certificates/632352.pem
	I0210 13:25:06.338851  691489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/632352.pem
	I0210 13:25:06.344301  691489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/632352.pem /etc/ssl/certs/51391683.0"
	I0210 13:25:06.354153  691489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6323522.pem && ln -fs /usr/share/ca-certificates/6323522.pem /etc/ssl/certs/6323522.pem"
	I0210 13:25:06.363691  691489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6323522.pem
	I0210 13:25:06.367845  691489 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 10 12:13 /usr/share/ca-certificates/6323522.pem
	I0210 13:25:06.367903  691489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6323522.pem
	I0210 13:25:06.373065  691489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6323522.pem /etc/ssl/certs/3ec20f2e.0"
	I0210 13:25:06.382808  691489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0210 13:25:06.392603  691489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0210 13:25:06.396500  691489 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 10 12:05 /usr/share/ca-certificates/minikubeCA.pem
	I0210 13:25:06.396554  691489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0210 13:25:06.401622  691489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
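	(The symlink names above, 51391683.0, 3ec20f2e.0 and b5213941.0, are OpenSSL subject-hash names; a sketch deriving one of them rather than hard-coding it:)
	# Compute the subject hash OpenSSL uses for CA lookup and create the matching .0 symlink (as done above).
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"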
	I0210 13:25:06.411181  691489 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0210 13:25:06.415359  691489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0210 13:25:06.420593  691489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0210 13:25:06.426061  691489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0210 13:25:06.431327  691489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0210 13:25:06.436533  691489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0210 13:25:06.441660  691489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
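	(Each -checkend run above asks whether a certificate is still valid 24 hours from now; a minimal standalone check:)
	# Exit status 0 means the cert will not expire within the next 86400 seconds; non-zero means it will.
	if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
	  echo "certificate valid for at least another 24h"
	else
	  echo "certificate expires within 24h; regeneration needed"
	fi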
	I0210 13:25:06.446816  691489 kubeadm.go:392] StartCluster: {Name:newest-cni-078760 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-078760 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 13:25:06.446895  691489 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0210 13:25:06.446930  691489 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0210 13:25:06.483125  691489 cri.go:89] found id: ""
	I0210 13:25:06.483211  691489 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0210 13:25:06.493195  691489 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0210 13:25:06.493227  691489 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0210 13:25:06.493279  691489 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0210 13:25:06.502619  691489 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0210 13:25:06.503337  691489 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-078760" does not appear in /home/jenkins/minikube-integration/20383-625153/kubeconfig
	I0210 13:25:06.503714  691489 kubeconfig.go:62] /home/jenkins/minikube-integration/20383-625153/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-078760" cluster setting kubeconfig missing "newest-cni-078760" context setting]
	I0210 13:25:06.504205  691489 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20383-625153/kubeconfig: {Name:mke7ef1ff4ff1259856291979fdd0337df3a08b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:25:06.505630  691489 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0210 13:25:06.514911  691489 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.250
	I0210 13:25:06.514960  691489 kubeadm.go:1160] stopping kube-system containers ...
	I0210 13:25:06.514977  691489 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0210 13:25:06.515037  691489 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0210 13:25:06.554131  691489 cri.go:89] found id: ""
	I0210 13:25:06.554214  691489 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0210 13:25:06.570574  691489 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 13:25:06.579872  691489 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 13:25:06.579894  691489 kubeadm.go:157] found existing configuration files:
	
	I0210 13:25:06.579940  691489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0210 13:25:06.588189  691489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 13:25:06.588248  691489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 13:25:06.596978  691489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0210 13:25:06.605371  691489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 13:25:06.605424  691489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 13:25:06.613792  691489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0210 13:25:06.621620  691489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 13:25:06.621676  691489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 13:25:06.629800  691489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0210 13:25:06.637455  691489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 13:25:06.637496  691489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0210 13:25:06.645304  691489 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0210 13:25:06.653346  691489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 13:25:06.763579  691489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 13:25:07.851528  691489 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.087906654s)
	I0210 13:25:07.851566  691489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0210 13:25:08.057073  691489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 13:25:08.142252  691489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0210 13:25:08.227881  691489 api_server.go:52] waiting for apiserver process to appear ...
	I0210 13:25:08.227987  691489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:25:08.728481  691489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:25:09.228059  691489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:25:09.728607  691489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:25:10.228860  691489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:25:10.310725  691489 api_server.go:72] duration metric: took 2.082844906s to wait for apiserver process to appear ...
	I0210 13:25:10.310754  691489 api_server.go:88] waiting for apiserver healthz status ...
	I0210 13:25:10.310775  691489 api_server.go:253] Checking apiserver healthz at https://192.168.39.250:8443/healthz ...
	I0210 13:25:10.311265  691489 api_server.go:269] stopped: https://192.168.39.250:8443/healthz: Get "https://192.168.39.250:8443/healthz": dial tcp 192.168.39.250:8443: connect: connection refused
	I0210 13:25:10.810910  691489 api_server.go:253] Checking apiserver healthz at https://192.168.39.250:8443/healthz ...
	I0210 13:25:09.289289  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:11.769486  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:12.947266  691489 api_server.go:279] https://192.168.39.250:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0210 13:25:12.947307  691489 api_server.go:103] status: https://192.168.39.250:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0210 13:25:12.947327  691489 api_server.go:253] Checking apiserver healthz at https://192.168.39.250:8443/healthz ...
	I0210 13:25:12.971991  691489 api_server.go:279] https://192.168.39.250:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0210 13:25:12.972028  691489 api_server.go:103] status: https://192.168.39.250:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0210 13:25:13.311219  691489 api_server.go:253] Checking apiserver healthz at https://192.168.39.250:8443/healthz ...
	I0210 13:25:13.322624  691489 api_server.go:279] https://192.168.39.250:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0210 13:25:13.322653  691489 api_server.go:103] status: https://192.168.39.250:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0210 13:25:13.811259  691489 api_server.go:253] Checking apiserver healthz at https://192.168.39.250:8443/healthz ...
	I0210 13:25:13.817960  691489 api_server.go:279] https://192.168.39.250:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0210 13:25:13.817992  691489 api_server.go:103] status: https://192.168.39.250:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0210 13:25:14.311715  691489 api_server.go:253] Checking apiserver healthz at https://192.168.39.250:8443/healthz ...
	I0210 13:25:14.319786  691489 api_server.go:279] https://192.168.39.250:8443/healthz returned 200:
	ok
	I0210 13:25:14.327973  691489 api_server.go:141] control plane version: v1.32.1
	I0210 13:25:14.328010  691489 api_server.go:131] duration metric: took 4.017247642s to wait for apiserver health ...
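The polling above is minikube waiting for /healthz to flip from 500 (post-start hooks such as rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes still running) to 200. Below is a minimal Go sketch of that kind of wait loop, assuming an illustrative endpoint and skipping TLS verification the way an ad-hoc probe against the self-signed apiserver certificate would have to; it is not minikube's actual client code.

// healthzpoll: poll an apiserver /healthz endpoint until it returns 200 OK
// or a deadline expires. URL and InsecureSkipVerify are illustrative
// assumptions, not minikube's real configuration.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Self-signed apiserver cert: a bare probe must skip
			// verification (or trust the cluster CA instead).
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz reported "ok"
			}
			// A 500 here usually means post-start hooks are still running.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.250:8443/healthz", 30*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver healthy")
}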
	I0210 13:25:14.328025  691489 cni.go:84] Creating CNI manager for ""
	I0210 13:25:14.328034  691489 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 13:25:14.330184  691489 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0210 13:25:14.331476  691489 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0210 13:25:14.348249  691489 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
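The bridge CNI step above copies a small conflist to /etc/cni/net.d/1-k8s.conflist on the node. As a rough illustration of the file format only, here is a sketch that writes a generic bridge/host-local conflist locally; the JSON body is a hypothetical example in standard CNI conflist form, not the exact 496-byte file minikube generates.

// writecni: drop a bridge CNI config to disk. The conflist body is a
// generic, assumed example, not minikube's actual file contents.
package main

import (
	"fmt"
	"os"
)

const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	// Writing to /etc/cni/net.d requires root on the target node; use a
	// local path so the sketch runs anywhere.
	path := "1-k8s.conflist"
	if err := os.WriteFile(path, []byte(bridgeConflist), 0o644); err != nil {
		fmt.Println("write failed:", err)
		return
	}
	fmt.Println("wrote", path)
}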
	I0210 13:25:14.366751  691489 system_pods.go:43] waiting for kube-system pods to appear ...
	I0210 13:25:14.371867  691489 system_pods.go:59] 8 kube-system pods found
	I0210 13:25:14.371912  691489 system_pods.go:61] "coredns-668d6bf9bc-6xmgm" [e079a121-a86a-40b1-ac42-e3c1d4a45d3e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0210 13:25:14.371924  691489 system_pods.go:61] "etcd-newest-cni-078760" [ab03adeb-629d-40cc-b5a7-612855165223] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0210 13:25:14.371934  691489 system_pods.go:61] "kube-apiserver-newest-cni-078760" [d6bb0517-d5ab-4839-8974-f7c6d58dad52] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0210 13:25:14.371943  691489 system_pods.go:61] "kube-controller-manager-newest-cni-078760" [960a3334-7167-4942-8f1c-5a03ea01e628] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0210 13:25:14.371947  691489 system_pods.go:61] "kube-proxy-kd8mx" [951cb4ab-6e99-4be5-87ee-9e9c8eb4c635] Running
	I0210 13:25:14.371958  691489 system_pods.go:61] "kube-scheduler-newest-cni-078760" [bb9270e8-85d5-460e-89b5-49f374c1775d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0210 13:25:14.371964  691489 system_pods.go:61] "metrics-server-f79f97bbb-m2m4m" [9505b23a-756e-405a-a279-9e5a64082f8d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0210 13:25:14.371973  691489 system_pods.go:61] "storage-provisioner" [027d0f58-173c-4c51-86c6-461f4393192c] Running
	I0210 13:25:14.371978  691489 system_pods.go:74] duration metric: took 5.204788ms to wait for pod list to return data ...
	I0210 13:25:14.371986  691489 node_conditions.go:102] verifying NodePressure condition ...
	I0210 13:25:14.376210  691489 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 13:25:14.376236  691489 node_conditions.go:123] node cpu capacity is 2
	I0210 13:25:14.376248  691489 node_conditions.go:105] duration metric: took 4.255584ms to run NodePressure ...
	I0210 13:25:14.376267  691489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 13:25:14.658659  691489 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0210 13:25:14.673616  691489 ops.go:34] apiserver oom_adj: -16
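The oom_adj check above shells out to pgrep and cat on the node. A sketch of the same lookup done directly against /proc follows; it matches on the process comm name, which is close to but not identical to pgrep -xnf's full-command-line match, and reads the legacy oom_adj file the log uses (oom_score_adj is the modern equivalent).

// procOOMAdj finds the first process whose comm matches name and returns
// the contents of its /proc/<pid>/oom_adj, mirroring
// "cat /proc/$(pgrep kube-apiserver)/oom_adj" from the log.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func procOOMAdj(name string) (string, error) {
	entries, err := os.ReadDir("/proc")
	if err != nil {
		return "", err
	}
	for _, e := range entries {
		if !e.IsDir() {
			continue
		}
		comm, err := os.ReadFile(filepath.Join("/proc", e.Name(), "comm"))
		if err != nil {
			continue // not a pid directory, or the process exited
		}
		if strings.TrimSpace(string(comm)) != name {
			continue
		}
		adj, err := os.ReadFile(filepath.Join("/proc", e.Name(), "oom_adj"))
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(adj)), nil
	}
	return "", fmt.Errorf("no process named %q found", name)
}

func main() {
	adj, err := procOOMAdj("kube-apiserver")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kube-apiserver oom_adj:", adj) // -16 in this run
}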
	I0210 13:25:14.673643  691489 kubeadm.go:597] duration metric: took 8.180409154s to restartPrimaryControlPlane
	I0210 13:25:14.673654  691489 kubeadm.go:394] duration metric: took 8.226850795s to StartCluster
	I0210 13:25:14.673678  691489 settings.go:142] acquiring lock: {Name:mk4bd8331d641665e48ff1d1c4382f2e915609be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:25:14.673775  691489 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20383-625153/kubeconfig
	I0210 13:25:14.674826  691489 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20383-625153/kubeconfig: {Name:mke7ef1ff4ff1259856291979fdd0337df3a08b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:25:14.675121  691489 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0210 13:25:14.675203  691489 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0210 13:25:14.675305  691489 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-078760"
	I0210 13:25:14.675332  691489 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-078760"
	I0210 13:25:14.675330  691489 config.go:182] Loaded profile config "newest-cni-078760": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	W0210 13:25:14.675339  691489 addons.go:247] addon storage-provisioner should already be in state true
	I0210 13:25:14.675327  691489 addons.go:69] Setting default-storageclass=true in profile "newest-cni-078760"
	I0210 13:25:14.675356  691489 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-078760"
	I0210 13:25:14.675374  691489 host.go:66] Checking if "newest-cni-078760" exists ...
	I0210 13:25:14.675362  691489 addons.go:69] Setting dashboard=true in profile "newest-cni-078760"
	I0210 13:25:14.675406  691489 addons.go:238] Setting addon dashboard=true in "newest-cni-078760"
	I0210 13:25:14.675373  691489 addons.go:69] Setting metrics-server=true in profile "newest-cni-078760"
	W0210 13:25:14.675416  691489 addons.go:247] addon dashboard should already be in state true
	I0210 13:25:14.675439  691489 addons.go:238] Setting addon metrics-server=true in "newest-cni-078760"
	W0210 13:25:14.675452  691489 addons.go:247] addon metrics-server should already be in state true
	I0210 13:25:14.675456  691489 host.go:66] Checking if "newest-cni-078760" exists ...
	I0210 13:25:14.675501  691489 host.go:66] Checking if "newest-cni-078760" exists ...
	I0210 13:25:14.675825  691489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:25:14.675825  691489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:25:14.675865  691489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:25:14.675949  691489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:25:14.675956  691489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:25:14.675998  691489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:25:14.675994  691489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:25:14.676030  691489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:25:14.676626  691489 out.go:177] * Verifying Kubernetes components...
	I0210 13:25:14.677970  691489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 13:25:14.692819  691489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41133
	I0210 13:25:14.692863  691489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43933
	I0210 13:25:14.693307  691489 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:25:14.693457  691489 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:25:14.693889  691489 main.go:141] libmachine: Using API Version  1
	I0210 13:25:14.693917  691489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:25:14.694044  691489 main.go:141] libmachine: Using API Version  1
	I0210 13:25:14.694067  691489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:25:14.694275  691489 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:25:14.694467  691489 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:25:14.694675  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetState
	I0210 13:25:14.694875  691489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:25:14.694910  691489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:25:14.695631  691489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39397
	I0210 13:25:14.695666  691489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40755
	I0210 13:25:14.696018  691489 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:25:14.696028  691489 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:25:14.696521  691489 main.go:141] libmachine: Using API Version  1
	I0210 13:25:14.696541  691489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:25:14.696669  691489 main.go:141] libmachine: Using API Version  1
	I0210 13:25:14.696690  691489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:25:14.696922  691489 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:25:14.697247  691489 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:25:14.697481  691489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:25:14.697516  691489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:25:14.697803  691489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:25:14.697850  691489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:25:14.698182  691489 addons.go:238] Setting addon default-storageclass=true in "newest-cni-078760"
	W0210 13:25:14.698206  691489 addons.go:247] addon default-storageclass should already be in state true
	I0210 13:25:14.698236  691489 host.go:66] Checking if "newest-cni-078760" exists ...
	I0210 13:25:14.698612  691489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:25:14.698664  691489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:25:14.713772  691489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40313
	I0210 13:25:14.714442  691489 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:25:14.715026  691489 main.go:141] libmachine: Using API Version  1
	I0210 13:25:14.715052  691489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:25:14.715415  691489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46367
	I0210 13:25:14.715437  691489 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:25:14.715597  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetState
	I0210 13:25:14.715945  691489 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:25:14.716483  691489 main.go:141] libmachine: Using API Version  1
	I0210 13:25:14.716511  691489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:25:14.716848  691489 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:25:14.717071  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetState
	I0210 13:25:14.717863  691489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36917
	I0210 13:25:14.717964  691489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38361
	I0210 13:25:14.718191  691489 main.go:141] libmachine: (newest-cni-078760) Calling .DriverName
	I0210 13:25:14.718430  691489 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:25:14.718536  691489 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:25:14.718898  691489 main.go:141] libmachine: (newest-cni-078760) Calling .DriverName
	I0210 13:25:14.718993  691489 main.go:141] libmachine: Using API Version  1
	I0210 13:25:14.719014  691489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:25:14.719122  691489 main.go:141] libmachine: Using API Version  1
	I0210 13:25:14.719136  691489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:25:14.719353  691489 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:25:14.719538  691489 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:25:14.719570  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetState
	I0210 13:25:14.720089  691489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:25:14.720146  691489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:25:14.720737  691489 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0210 13:25:14.720739  691489 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0210 13:25:14.721144  691489 main.go:141] libmachine: (newest-cni-078760) Calling .DriverName
	I0210 13:25:14.722697  691489 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 13:25:14.722765  691489 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0210 13:25:14.722799  691489 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0210 13:25:14.722826  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHHostname
	I0210 13:25:14.724344  691489 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0210 13:25:14.724481  691489 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0210 13:25:14.724502  691489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0210 13:25:14.724523  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHHostname
	I0210 13:25:14.725362  691489 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0210 13:25:14.725382  691489 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0210 13:25:14.725403  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHHostname
	I0210 13:25:14.726853  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:14.727274  691489 main.go:141] libmachine: (newest-cni-078760) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a1:b8", ip: ""} in network mk-newest-cni-078760: {Iface:virbr1 ExpiryTime:2025-02-10 14:24:52 +0000 UTC Type:0 Mac:52:54:00:6b:a1:b8 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:newest-cni-078760 Clientid:01:52:54:00:6b:a1:b8}
	I0210 13:25:14.727299  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined IP address 192.168.39.250 and MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:14.727826  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHPort
	I0210 13:25:14.728040  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHKeyPath
	I0210 13:25:14.728183  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHUsername
	I0210 13:25:14.728402  691489 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/newest-cni-078760/id_rsa Username:docker}
	I0210 13:25:14.728481  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:14.728865  691489 main.go:141] libmachine: (newest-cni-078760) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a1:b8", ip: ""} in network mk-newest-cni-078760: {Iface:virbr1 ExpiryTime:2025-02-10 14:24:52 +0000 UTC Type:0 Mac:52:54:00:6b:a1:b8 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:newest-cni-078760 Clientid:01:52:54:00:6b:a1:b8}
	I0210 13:25:14.728895  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined IP address 192.168.39.250 and MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:14.728973  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:14.729181  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHPort
	I0210 13:25:14.729432  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHKeyPath
	I0210 13:25:14.729516  691489 main.go:141] libmachine: (newest-cni-078760) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a1:b8", ip: ""} in network mk-newest-cni-078760: {Iface:virbr1 ExpiryTime:2025-02-10 14:24:52 +0000 UTC Type:0 Mac:52:54:00:6b:a1:b8 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:newest-cni-078760 Clientid:01:52:54:00:6b:a1:b8}
	I0210 13:25:14.729542  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined IP address 192.168.39.250 and MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:14.729579  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHUsername
	I0210 13:25:14.729722  691489 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/newest-cni-078760/id_rsa Username:docker}
	I0210 13:25:14.729807  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHPort
	I0210 13:25:14.729972  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHKeyPath
	I0210 13:25:14.730124  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHUsername
	I0210 13:25:14.730252  691489 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/newest-cni-078760/id_rsa Username:docker}
	I0210 13:25:14.765255  691489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33493
	I0210 13:25:14.765791  691489 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:25:14.766387  691489 main.go:141] libmachine: Using API Version  1
	I0210 13:25:14.766420  691489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:25:14.766810  691489 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:25:14.767031  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetState
	I0210 13:25:14.768796  691489 main.go:141] libmachine: (newest-cni-078760) Calling .DriverName
	I0210 13:25:14.769012  691489 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0210 13:25:14.769028  691489 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0210 13:25:14.769046  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHHostname
	I0210 13:25:14.772060  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:14.772513  691489 main.go:141] libmachine: (newest-cni-078760) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a1:b8", ip: ""} in network mk-newest-cni-078760: {Iface:virbr1 ExpiryTime:2025-02-10 14:24:52 +0000 UTC Type:0 Mac:52:54:00:6b:a1:b8 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:newest-cni-078760 Clientid:01:52:54:00:6b:a1:b8}
	I0210 13:25:14.772563  691489 main.go:141] libmachine: (newest-cni-078760) DBG | domain newest-cni-078760 has defined IP address 192.168.39.250 and MAC address 52:54:00:6b:a1:b8 in network mk-newest-cni-078760
	I0210 13:25:14.772688  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHPort
	I0210 13:25:14.772874  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHKeyPath
	I0210 13:25:14.773046  691489 main.go:141] libmachine: (newest-cni-078760) Calling .GetSSHUsername
	I0210 13:25:14.773224  691489 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/newest-cni-078760/id_rsa Username:docker}
	I0210 13:25:14.847727  691489 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 13:25:14.870840  691489 api_server.go:52] waiting for apiserver process to appear ...
	I0210 13:25:14.870928  691489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:25:14.886084  691489 api_server.go:72] duration metric: took 210.925044ms to wait for apiserver process to appear ...
	I0210 13:25:14.886114  691489 api_server.go:88] waiting for apiserver healthz status ...
	I0210 13:25:14.886139  691489 api_server.go:253] Checking apiserver healthz at https://192.168.39.250:8443/healthz ...
	I0210 13:25:14.890757  691489 api_server.go:279] https://192.168.39.250:8443/healthz returned 200:
	ok
	I0210 13:25:14.891635  691489 api_server.go:141] control plane version: v1.32.1
	I0210 13:25:14.891659  691489 api_server.go:131] duration metric: took 5.538021ms to wait for apiserver health ...
	I0210 13:25:14.891667  691489 system_pods.go:43] waiting for kube-system pods to appear ...
	I0210 13:25:14.894919  691489 system_pods.go:59] 8 kube-system pods found
	I0210 13:25:14.894946  691489 system_pods.go:61] "coredns-668d6bf9bc-6xmgm" [e079a121-a86a-40b1-ac42-e3c1d4a45d3e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0210 13:25:14.894957  691489 system_pods.go:61] "etcd-newest-cni-078760" [ab03adeb-629d-40cc-b5a7-612855165223] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0210 13:25:14.894978  691489 system_pods.go:61] "kube-apiserver-newest-cni-078760" [d6bb0517-d5ab-4839-8974-f7c6d58dad52] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0210 13:25:14.894993  691489 system_pods.go:61] "kube-controller-manager-newest-cni-078760" [960a3334-7167-4942-8f1c-5a03ea01e628] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0210 13:25:14.895003  691489 system_pods.go:61] "kube-proxy-kd8mx" [951cb4ab-6e99-4be5-87ee-9e9c8eb4c635] Running
	I0210 13:25:14.895012  691489 system_pods.go:61] "kube-scheduler-newest-cni-078760" [bb9270e8-85d5-460e-89b5-49f374c1775d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0210 13:25:14.895020  691489 system_pods.go:61] "metrics-server-f79f97bbb-m2m4m" [9505b23a-756e-405a-a279-9e5a64082f8d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0210 13:25:14.895031  691489 system_pods.go:61] "storage-provisioner" [027d0f58-173c-4c51-86c6-461f4393192c] Running
	I0210 13:25:14.895036  691489 system_pods.go:74] duration metric: took 3.36419ms to wait for pod list to return data ...
	I0210 13:25:14.895046  691489 default_sa.go:34] waiting for default service account to be created ...
	I0210 13:25:14.896970  691489 default_sa.go:45] found service account: "default"
	I0210 13:25:14.896991  691489 default_sa.go:55] duration metric: took 1.936863ms for default service account to be created ...
	I0210 13:25:14.897002  691489 kubeadm.go:582] duration metric: took 221.847464ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0210 13:25:14.897020  691489 node_conditions.go:102] verifying NodePressure condition ...
	I0210 13:25:14.898549  691489 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 13:25:14.898572  691489 node_conditions.go:123] node cpu capacity is 2
	I0210 13:25:14.898582  691489 node_conditions.go:105] duration metric: took 1.55688ms to run NodePressure ...
	I0210 13:25:14.898599  691489 start.go:241] waiting for startup goroutines ...
	I0210 13:25:14.932116  691489 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0210 13:25:14.932150  691489 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0210 13:25:14.934060  691489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0210 13:25:14.952546  691489 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0210 13:25:14.952574  691489 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0210 13:25:15.029473  691489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0210 13:25:15.031105  691489 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0210 13:25:15.031141  691489 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0210 13:25:15.056497  691489 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0210 13:25:15.056538  691489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0210 13:25:15.095190  691489 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0210 13:25:15.095224  691489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0210 13:25:15.121346  691489 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0210 13:25:15.121374  691489 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0210 13:25:15.153148  691489 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0210 13:25:15.153179  691489 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0210 13:25:15.216706  691489 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0210 13:25:15.216746  691489 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0210 13:25:15.241907  691489 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0210 13:25:15.241943  691489 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0210 13:25:15.302673  691489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0210 13:25:15.365047  691489 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0210 13:25:15.365100  691489 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0210 13:25:15.440460  691489 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0210 13:25:15.440489  691489 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0210 13:25:15.518952  691489 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0210 13:25:15.518987  691489 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0210 13:25:15.565860  691489 main.go:141] libmachine: Making call to close driver server
	I0210 13:25:15.565890  691489 main.go:141] libmachine: (newest-cni-078760) Calling .Close
	I0210 13:25:15.566253  691489 main.go:141] libmachine: Successfully made call to close driver server
	I0210 13:25:15.566279  691489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 13:25:15.566278  691489 main.go:141] libmachine: (newest-cni-078760) DBG | Closing plugin on server side
	I0210 13:25:15.566296  691489 main.go:141] libmachine: Making call to close driver server
	I0210 13:25:15.566308  691489 main.go:141] libmachine: (newest-cni-078760) Calling .Close
	I0210 13:25:15.566612  691489 main.go:141] libmachine: Successfully made call to close driver server
	I0210 13:25:15.566656  691489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 13:25:15.576240  691489 main.go:141] libmachine: Making call to close driver server
	I0210 13:25:15.576264  691489 main.go:141] libmachine: (newest-cni-078760) Calling .Close
	I0210 13:25:15.576535  691489 main.go:141] libmachine: Successfully made call to close driver server
	I0210 13:25:15.576557  691489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 13:25:15.576595  691489 main.go:141] libmachine: (newest-cni-078760) DBG | Closing plugin on server side
	I0210 13:25:15.580109  691489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0210 13:25:16.740012  691489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.71049179s)
	I0210 13:25:16.740081  691489 main.go:141] libmachine: Making call to close driver server
	I0210 13:25:16.740093  691489 main.go:141] libmachine: (newest-cni-078760) Calling .Close
	I0210 13:25:16.740447  691489 main.go:141] libmachine: Successfully made call to close driver server
	I0210 13:25:16.740469  691489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 13:25:16.740478  691489 main.go:141] libmachine: Making call to close driver server
	I0210 13:25:16.740487  691489 main.go:141] libmachine: (newest-cni-078760) Calling .Close
	I0210 13:25:16.740747  691489 main.go:141] libmachine: Successfully made call to close driver server
	I0210 13:25:16.740797  691489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 13:25:16.740830  691489 main.go:141] libmachine: (newest-cni-078760) DBG | Closing plugin on server side
	I0210 13:25:16.805424  691489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.502701364s)
	I0210 13:25:16.805480  691489 main.go:141] libmachine: Making call to close driver server
	I0210 13:25:16.805494  691489 main.go:141] libmachine: (newest-cni-078760) Calling .Close
	I0210 13:25:16.805796  691489 main.go:141] libmachine: (newest-cni-078760) DBG | Closing plugin on server side
	I0210 13:25:16.805817  691489 main.go:141] libmachine: Successfully made call to close driver server
	I0210 13:25:16.805851  691489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 13:25:16.805880  691489 main.go:141] libmachine: Making call to close driver server
	I0210 13:25:16.805893  691489 main.go:141] libmachine: (newest-cni-078760) Calling .Close
	I0210 13:25:16.806125  691489 main.go:141] libmachine: Successfully made call to close driver server
	I0210 13:25:16.806141  691489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 13:25:16.806142  691489 main.go:141] libmachine: (newest-cni-078760) DBG | Closing plugin on server side
	I0210 13:25:16.806153  691489 addons.go:479] Verifying addon metrics-server=true in "newest-cni-078760"
	I0210 13:25:17.452174  691489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.872018184s)
	I0210 13:25:17.452259  691489 main.go:141] libmachine: Making call to close driver server
	I0210 13:25:17.452280  691489 main.go:141] libmachine: (newest-cni-078760) Calling .Close
	I0210 13:25:17.452708  691489 main.go:141] libmachine: Successfully made call to close driver server
	I0210 13:25:17.452733  691489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 13:25:17.452748  691489 main.go:141] libmachine: Making call to close driver server
	I0210 13:25:17.452742  691489 main.go:141] libmachine: (newest-cni-078760) DBG | Closing plugin on server side
	I0210 13:25:17.452757  691489 main.go:141] libmachine: (newest-cni-078760) Calling .Close
	I0210 13:25:17.453057  691489 main.go:141] libmachine: (newest-cni-078760) DBG | Closing plugin on server side
	I0210 13:25:17.453089  691489 main.go:141] libmachine: Successfully made call to close driver server
	I0210 13:25:17.453098  691489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 13:25:17.455198  691489 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-078760 addons enable metrics-server
	
	I0210 13:25:17.456604  691489 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0210 13:25:17.458205  691489 addons.go:514] duration metric: took 2.782999976s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0210 13:25:17.458254  691489 start.go:246] waiting for cluster config update ...
	I0210 13:25:17.458273  691489 start.go:255] writing updated cluster config ...
	I0210 13:25:17.458614  691489 ssh_runner.go:195] Run: rm -f paused
	I0210 13:25:17.524434  691489 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0210 13:25:17.526201  691489 out.go:177] * Done! kubectl is now configured to use "newest-cni-078760" cluster and "default" namespace by default
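The addon phase above stages manifests under /etc/kubernetes/addons and applies them with the bundled kubectl while KUBECONFIG points at /var/lib/minikube/kubeconfig. Below is a sketch of that apply step using os/exec, assuming kubectl is on PATH and reusing the paths from the log purely for illustration; minikube itself runs the equivalent command over SSH on the node.

// applyaddons: run "kubectl apply -f <manifest>..." against a given
// kubeconfig, the way the addon step in the log does.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func applyManifests(kubeconfig string, manifests ...string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command("kubectl", args...)
	// Point kubectl at the cluster's kubeconfig, as the log does with
	// KUBECONFIG=/var/lib/minikube/kubeconfig.
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	fmt.Printf("%s", out)
	return nil
}

func main() {
	if err := applyManifests("/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/storage-provisioner.yaml"); err != nil {
		fmt.Println(err)
	}
}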
	I0210 13:25:13.769744  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:15.770291  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:18.270374  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:20.270770  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:22.769900  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:24.770480  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:27.269398  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:29.270791  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:31.769785  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:34.269730  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:36.270751  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:38.770282  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:41.270569  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:43.769870  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:46.269860  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:48.269910  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:50.770287  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:53.270301  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:55.769898  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:25:57.770053  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:00.270852  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:02.769689  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:04.770190  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:06.770226  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:09.271157  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:11.770318  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:14.269317  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:16.270215  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:18.770402  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:21.269667  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:23.275443  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:25.770573  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:28.270716  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:30.271759  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:32.770603  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:35.269945  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:37.769930  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:39.783553  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:42.271101  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:44.774027  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:47.270211  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:49.771412  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:52.271199  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:52.767674  688914 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0210 13:26:52.767807  688914 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0210 13:26:52.769626  688914 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0210 13:26:52.769700  688914 kubeadm.go:310] [preflight] Running pre-flight checks
	I0210 13:26:52.769810  688914 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0210 13:26:52.769934  688914 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0210 13:26:52.770031  688914 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0210 13:26:52.770114  688914 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0210 13:26:52.771972  688914 out.go:235]   - Generating certificates and keys ...
	I0210 13:26:52.772065  688914 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0210 13:26:52.772157  688914 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0210 13:26:52.772272  688914 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0210 13:26:52.772338  688914 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0210 13:26:52.772402  688914 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0210 13:26:52.772464  688914 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0210 13:26:52.772523  688914 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0210 13:26:52.772581  688914 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0210 13:26:52.772660  688914 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0210 13:26:52.772734  688914 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0210 13:26:52.772770  688914 kubeadm.go:310] [certs] Using the existing "sa" key
	I0210 13:26:52.772822  688914 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0210 13:26:52.772867  688914 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0210 13:26:52.772917  688914 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0210 13:26:52.772974  688914 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0210 13:26:52.773022  688914 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0210 13:26:52.773151  688914 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0210 13:26:52.773258  688914 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0210 13:26:52.773305  688914 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0210 13:26:52.773386  688914 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0210 13:26:52.774698  688914 out.go:235]   - Booting up control plane ...
	I0210 13:26:52.774783  688914 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0210 13:26:52.774853  688914 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0210 13:26:52.774915  688914 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0210 13:26:52.775002  688914 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0210 13:26:52.775179  688914 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0210 13:26:52.775244  688914 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0210 13:26:52.775340  688914 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:26:52.775545  688914 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:26:52.775613  688914 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:26:52.775783  688914 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:26:52.775841  688914 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:26:52.776005  688914 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:26:52.776090  688914 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:26:52.776307  688914 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:26:52.776424  688914 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:26:52.776602  688914 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:26:52.776616  688914 kubeadm.go:310] 
	I0210 13:26:52.776653  688914 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0210 13:26:52.776690  688914 kubeadm.go:310] 		timed out waiting for the condition
	I0210 13:26:52.776699  688914 kubeadm.go:310] 
	I0210 13:26:52.776733  688914 kubeadm.go:310] 	This error is likely caused by:
	I0210 13:26:52.776763  688914 kubeadm.go:310] 		- The kubelet is not running
	I0210 13:26:52.776850  688914 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0210 13:26:52.776856  688914 kubeadm.go:310] 
	I0210 13:26:52.776949  688914 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0210 13:26:52.776979  688914 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0210 13:26:52.777011  688914 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0210 13:26:52.777017  688914 kubeadm.go:310] 
	I0210 13:26:52.777134  688914 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0210 13:26:52.777239  688914 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0210 13:26:52.777252  688914 kubeadm.go:310] 
	I0210 13:26:52.777401  688914 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0210 13:26:52.777543  688914 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0210 13:26:52.777651  688914 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0210 13:26:52.777721  688914 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0210 13:26:52.777789  688914 kubeadm.go:310] 
	W0210 13:26:52.777852  688914 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0210 13:26:52.777903  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0210 13:26:54.770289  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:56.770506  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:26:58.074596  688914 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.296665584s)
	I0210 13:26:58.074683  688914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 13:26:58.091152  688914 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 13:26:58.102648  688914 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 13:26:58.102673  688914 kubeadm.go:157] found existing configuration files:
	
	I0210 13:26:58.102740  688914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0210 13:26:58.113654  688914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 13:26:58.113729  688914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 13:26:58.124863  688914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0210 13:26:58.135257  688914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 13:26:58.135321  688914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 13:26:58.145634  688914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0210 13:26:58.154591  688914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 13:26:58.154654  688914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 13:26:58.163835  688914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0210 13:26:58.172611  688914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 13:26:58.172679  688914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0210 13:26:58.182392  688914 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0210 13:26:58.251261  688914 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0210 13:26:58.251358  688914 kubeadm.go:310] [preflight] Running pre-flight checks
	I0210 13:26:58.383309  688914 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0210 13:26:58.383435  688914 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0210 13:26:58.383542  688914 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0210 13:26:58.550776  688914 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0210 13:26:58.552680  688914 out.go:235]   - Generating certificates and keys ...
	I0210 13:26:58.552793  688914 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0210 13:26:58.552881  688914 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0210 13:26:58.553007  688914 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0210 13:26:58.553091  688914 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0210 13:26:58.553226  688914 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0210 13:26:58.553329  688914 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0210 13:26:58.553420  688914 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0210 13:26:58.553525  688914 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0210 13:26:58.553642  688914 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0210 13:26:58.553774  688914 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0210 13:26:58.553837  688914 kubeadm.go:310] [certs] Using the existing "sa" key
	I0210 13:26:58.553918  688914 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0210 13:26:58.654826  688914 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0210 13:26:58.871525  688914 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0210 13:26:59.121959  688914 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0210 13:26:59.254004  688914 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0210 13:26:59.268822  688914 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0210 13:26:59.269202  688914 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0210 13:26:59.269279  688914 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0210 13:26:59.410011  688914 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0210 13:26:59.412184  688914 out.go:235]   - Booting up control plane ...
	I0210 13:26:59.412320  688914 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0210 13:26:59.425128  688914 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0210 13:26:59.426554  688914 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0210 13:26:59.427605  688914 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0210 13:26:59.433353  688914 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0210 13:26:59.270125  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:01.270335  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:03.770196  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:06.270103  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:08.770078  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:11.269430  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:13.770250  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:16.269952  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:18.270261  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:20.270697  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:22.768944  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:24.770265  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:27.269151  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:29.270121  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:31.271007  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:33.769366  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:35.769901  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:39.435230  688914 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0210 13:27:39.435410  688914 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:27:39.435648  688914 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:27:38.270194  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:40.770209  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:44.436555  688914 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:27:44.436828  688914 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:27:42.770480  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:45.270561  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:47.770652  689817 pod_ready.go:103] pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace has status "Ready":"False"
	I0210 13:27:49.770343  689817 pod_ready.go:82] duration metric: took 4m0.005913971s for pod "metrics-server-f79f97bbb-sg6xj" in "kube-system" namespace to be "Ready" ...
	E0210 13:27:49.770375  689817 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0210 13:27:49.770383  689817 pod_ready.go:39] duration metric: took 4m9.41326084s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0210 13:27:49.770402  689817 api_server.go:52] waiting for apiserver process to appear ...
	I0210 13:27:49.770454  689817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:27:49.770518  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:27:49.817157  689817 cri.go:89] found id: "d627e9b7c3fa6a5649d806da19c32d2f57dec9716719b51822e8311467253f6a"
	I0210 13:27:49.817183  689817 cri.go:89] found id: ""
	I0210 13:27:49.817192  689817 logs.go:282] 1 containers: [d627e9b7c3fa6a5649d806da19c32d2f57dec9716719b51822e8311467253f6a]
	I0210 13:27:49.817252  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:49.821670  689817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:27:49.821737  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:27:49.857058  689817 cri.go:89] found id: "92f9c8b03501892b9afb12729375bb29bc3a99633166557dc27a11936957bbc9"
	I0210 13:27:49.857087  689817 cri.go:89] found id: ""
	I0210 13:27:49.857096  689817 logs.go:282] 1 containers: [92f9c8b03501892b9afb12729375bb29bc3a99633166557dc27a11936957bbc9]
	I0210 13:27:49.857182  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:49.861432  689817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:27:49.861505  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:27:49.897872  689817 cri.go:89] found id: "c69258e06a494a7d31ccc77bb451b1e2def37ce0dbe569f1723c6375331b9844"
	I0210 13:27:49.897903  689817 cri.go:89] found id: ""
	I0210 13:27:49.897914  689817 logs.go:282] 1 containers: [c69258e06a494a7d31ccc77bb451b1e2def37ce0dbe569f1723c6375331b9844]
	I0210 13:27:49.897982  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:49.902266  689817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:27:49.902339  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:27:49.944231  689817 cri.go:89] found id: "e43d7d065b40a13592892ab82740b9367609ef546af51f15305a3ed11cab3a31"
	I0210 13:27:49.944261  689817 cri.go:89] found id: ""
	I0210 13:27:49.944272  689817 logs.go:282] 1 containers: [e43d7d065b40a13592892ab82740b9367609ef546af51f15305a3ed11cab3a31]
	I0210 13:27:49.944336  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:49.948503  689817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:27:49.948579  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:27:49.990016  689817 cri.go:89] found id: "e2562695b2f325b7a32533b0bf97a658ef2ae704f29306876f7f4ed5ae008225"
	I0210 13:27:49.990040  689817 cri.go:89] found id: ""
	I0210 13:27:49.990048  689817 logs.go:282] 1 containers: [e2562695b2f325b7a32533b0bf97a658ef2ae704f29306876f7f4ed5ae008225]
	I0210 13:27:49.990106  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:49.994001  689817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:27:49.994060  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:27:50.027512  689817 cri.go:89] found id: "ac2812f3cb686632428ed2027438bdc46bec1328925155fe9268bc7049fcdfaa"
	I0210 13:27:50.027538  689817 cri.go:89] found id: ""
	I0210 13:27:50.027549  689817 logs.go:282] 1 containers: [ac2812f3cb686632428ed2027438bdc46bec1328925155fe9268bc7049fcdfaa]
	I0210 13:27:50.027614  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:50.031763  689817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:27:50.031823  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:27:50.066416  689817 cri.go:89] found id: ""
	I0210 13:27:50.066448  689817 logs.go:282] 0 containers: []
	W0210 13:27:50.066459  689817 logs.go:284] No container was found matching "kindnet"
	I0210 13:27:50.066467  689817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:27:50.066535  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:27:50.101054  689817 cri.go:89] found id: "bfd354ff383241bb1649bf5b0926b21820572cd5cdd6b975c30be9b3b1a3e9b0"
	I0210 13:27:50.101076  689817 cri.go:89] found id: ""
	I0210 13:27:50.101084  689817 logs.go:282] 1 containers: [bfd354ff383241bb1649bf5b0926b21820572cd5cdd6b975c30be9b3b1a3e9b0]
	I0210 13:27:50.101151  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:50.104987  689817 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0210 13:27:50.105056  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0210 13:27:50.142580  689817 cri.go:89] found id: "e7104d3c43d5daf3e430fe8c4dbd2854e5da62e7f1002ea1cbcf90e2662e5863"
	I0210 13:27:50.142608  689817 cri.go:89] found id: "9c223eae833ec780bdacab65f7a885047e04740e48b7cd90dbdce507d087969a"
	I0210 13:27:50.142614  689817 cri.go:89] found id: ""
	I0210 13:27:50.142624  689817 logs.go:282] 2 containers: [e7104d3c43d5daf3e430fe8c4dbd2854e5da62e7f1002ea1cbcf90e2662e5863 9c223eae833ec780bdacab65f7a885047e04740e48b7cd90dbdce507d087969a]
	I0210 13:27:50.142692  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:50.146540  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:50.150056  689817 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:27:50.150079  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0210 13:27:50.311229  689817 logs.go:123] Gathering logs for kube-apiserver [d627e9b7c3fa6a5649d806da19c32d2f57dec9716719b51822e8311467253f6a] ...
	I0210 13:27:50.311279  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d627e9b7c3fa6a5649d806da19c32d2f57dec9716719b51822e8311467253f6a"
	I0210 13:27:50.366011  689817 logs.go:123] Gathering logs for etcd [92f9c8b03501892b9afb12729375bb29bc3a99633166557dc27a11936957bbc9] ...
	I0210 13:27:50.366046  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92f9c8b03501892b9afb12729375bb29bc3a99633166557dc27a11936957bbc9"
	I0210 13:27:50.412490  689817 logs.go:123] Gathering logs for kube-controller-manager [ac2812f3cb686632428ed2027438bdc46bec1328925155fe9268bc7049fcdfaa] ...
	I0210 13:27:50.412523  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac2812f3cb686632428ed2027438bdc46bec1328925155fe9268bc7049fcdfaa"
	I0210 13:27:50.476890  689817 logs.go:123] Gathering logs for kubelet ...
	I0210 13:27:50.476940  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:27:50.571913  689817 logs.go:123] Gathering logs for coredns [c69258e06a494a7d31ccc77bb451b1e2def37ce0dbe569f1723c6375331b9844] ...
	I0210 13:27:50.571960  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c69258e06a494a7d31ccc77bb451b1e2def37ce0dbe569f1723c6375331b9844"
	I0210 13:27:50.606241  689817 logs.go:123] Gathering logs for kube-proxy [e2562695b2f325b7a32533b0bf97a658ef2ae704f29306876f7f4ed5ae008225] ...
	I0210 13:27:50.606284  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2562695b2f325b7a32533b0bf97a658ef2ae704f29306876f7f4ed5ae008225"
	I0210 13:27:50.640859  689817 logs.go:123] Gathering logs for storage-provisioner [e7104d3c43d5daf3e430fe8c4dbd2854e5da62e7f1002ea1cbcf90e2662e5863] ...
	I0210 13:27:50.640895  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7104d3c43d5daf3e430fe8c4dbd2854e5da62e7f1002ea1cbcf90e2662e5863"
	I0210 13:27:50.675943  689817 logs.go:123] Gathering logs for storage-provisioner [9c223eae833ec780bdacab65f7a885047e04740e48b7cd90dbdce507d087969a] ...
	I0210 13:27:50.675979  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c223eae833ec780bdacab65f7a885047e04740e48b7cd90dbdce507d087969a"
	I0210 13:27:50.708397  689817 logs.go:123] Gathering logs for container status ...
	I0210 13:27:50.708447  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:27:50.759969  689817 logs.go:123] Gathering logs for dmesg ...
	I0210 13:27:50.760002  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:27:50.773795  689817 logs.go:123] Gathering logs for kube-scheduler [e43d7d065b40a13592892ab82740b9367609ef546af51f15305a3ed11cab3a31] ...
	I0210 13:27:50.773827  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e43d7d065b40a13592892ab82740b9367609ef546af51f15305a3ed11cab3a31"
	I0210 13:27:50.808393  689817 logs.go:123] Gathering logs for kubernetes-dashboard [bfd354ff383241bb1649bf5b0926b21820572cd5cdd6b975c30be9b3b1a3e9b0] ...
	I0210 13:27:50.808426  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bfd354ff383241bb1649bf5b0926b21820572cd5cdd6b975c30be9b3b1a3e9b0"
	I0210 13:27:50.841955  689817 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:27:50.841988  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:27:54.437160  688914 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:27:54.437400  688914 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:27:53.852846  689817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:27:53.869585  689817 api_server.go:72] duration metric: took 4m20.830334356s to wait for apiserver process to appear ...
	I0210 13:27:53.869618  689817 api_server.go:88] waiting for apiserver healthz status ...
	I0210 13:27:53.869665  689817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:27:53.869721  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:27:53.907655  689817 cri.go:89] found id: "d627e9b7c3fa6a5649d806da19c32d2f57dec9716719b51822e8311467253f6a"
	I0210 13:27:53.907686  689817 cri.go:89] found id: ""
	I0210 13:27:53.907695  689817 logs.go:282] 1 containers: [d627e9b7c3fa6a5649d806da19c32d2f57dec9716719b51822e8311467253f6a]
	I0210 13:27:53.907758  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:53.911810  689817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:27:53.911893  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:27:53.952378  689817 cri.go:89] found id: "92f9c8b03501892b9afb12729375bb29bc3a99633166557dc27a11936957bbc9"
	I0210 13:27:53.952414  689817 cri.go:89] found id: ""
	I0210 13:27:53.952424  689817 logs.go:282] 1 containers: [92f9c8b03501892b9afb12729375bb29bc3a99633166557dc27a11936957bbc9]
	I0210 13:27:53.952481  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:53.956365  689817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:27:53.956441  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:27:53.991382  689817 cri.go:89] found id: "c69258e06a494a7d31ccc77bb451b1e2def37ce0dbe569f1723c6375331b9844"
	I0210 13:27:53.991419  689817 cri.go:89] found id: ""
	I0210 13:27:53.991428  689817 logs.go:282] 1 containers: [c69258e06a494a7d31ccc77bb451b1e2def37ce0dbe569f1723c6375331b9844]
	I0210 13:27:53.991485  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:53.995300  689817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:27:53.995386  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:27:54.029032  689817 cri.go:89] found id: "e43d7d065b40a13592892ab82740b9367609ef546af51f15305a3ed11cab3a31"
	I0210 13:27:54.029061  689817 cri.go:89] found id: ""
	I0210 13:27:54.029071  689817 logs.go:282] 1 containers: [e43d7d065b40a13592892ab82740b9367609ef546af51f15305a3ed11cab3a31]
	I0210 13:27:54.029148  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:54.032926  689817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:27:54.032978  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:27:54.070279  689817 cri.go:89] found id: "e2562695b2f325b7a32533b0bf97a658ef2ae704f29306876f7f4ed5ae008225"
	I0210 13:27:54.070310  689817 cri.go:89] found id: ""
	I0210 13:27:54.070321  689817 logs.go:282] 1 containers: [e2562695b2f325b7a32533b0bf97a658ef2ae704f29306876f7f4ed5ae008225]
	I0210 13:27:54.070380  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:54.074168  689817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:27:54.074254  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:27:54.108632  689817 cri.go:89] found id: "ac2812f3cb686632428ed2027438bdc46bec1328925155fe9268bc7049fcdfaa"
	I0210 13:27:54.108665  689817 cri.go:89] found id: ""
	I0210 13:27:54.108676  689817 logs.go:282] 1 containers: [ac2812f3cb686632428ed2027438bdc46bec1328925155fe9268bc7049fcdfaa]
	I0210 13:27:54.108752  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:54.112693  689817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:27:54.112777  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:27:54.147138  689817 cri.go:89] found id: ""
	I0210 13:27:54.147170  689817 logs.go:282] 0 containers: []
	W0210 13:27:54.147178  689817 logs.go:284] No container was found matching "kindnet"
	I0210 13:27:54.147185  689817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:27:54.147247  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:27:54.183531  689817 cri.go:89] found id: "bfd354ff383241bb1649bf5b0926b21820572cd5cdd6b975c30be9b3b1a3e9b0"
	I0210 13:27:54.183555  689817 cri.go:89] found id: ""
	I0210 13:27:54.183563  689817 logs.go:282] 1 containers: [bfd354ff383241bb1649bf5b0926b21820572cd5cdd6b975c30be9b3b1a3e9b0]
	I0210 13:27:54.183620  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:54.187900  689817 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0210 13:27:54.187970  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0210 13:27:54.224779  689817 cri.go:89] found id: "e7104d3c43d5daf3e430fe8c4dbd2854e5da62e7f1002ea1cbcf90e2662e5863"
	I0210 13:27:54.224803  689817 cri.go:89] found id: "9c223eae833ec780bdacab65f7a885047e04740e48b7cd90dbdce507d087969a"
	I0210 13:27:54.224807  689817 cri.go:89] found id: ""
	I0210 13:27:54.224815  689817 logs.go:282] 2 containers: [e7104d3c43d5daf3e430fe8c4dbd2854e5da62e7f1002ea1cbcf90e2662e5863 9c223eae833ec780bdacab65f7a885047e04740e48b7cd90dbdce507d087969a]
	I0210 13:27:54.224870  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:54.229251  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:54.232955  689817 logs.go:123] Gathering logs for coredns [c69258e06a494a7d31ccc77bb451b1e2def37ce0dbe569f1723c6375331b9844] ...
	I0210 13:27:54.232973  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c69258e06a494a7d31ccc77bb451b1e2def37ce0dbe569f1723c6375331b9844"
	I0210 13:27:54.266570  689817 logs.go:123] Gathering logs for kube-controller-manager [ac2812f3cb686632428ed2027438bdc46bec1328925155fe9268bc7049fcdfaa] ...
	I0210 13:27:54.266604  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac2812f3cb686632428ed2027438bdc46bec1328925155fe9268bc7049fcdfaa"
	I0210 13:27:54.343214  689817 logs.go:123] Gathering logs for storage-provisioner [e7104d3c43d5daf3e430fe8c4dbd2854e5da62e7f1002ea1cbcf90e2662e5863] ...
	I0210 13:27:54.343252  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7104d3c43d5daf3e430fe8c4dbd2854e5da62e7f1002ea1cbcf90e2662e5863"
	I0210 13:27:54.376776  689817 logs.go:123] Gathering logs for kubernetes-dashboard [bfd354ff383241bb1649bf5b0926b21820572cd5cdd6b975c30be9b3b1a3e9b0] ...
	I0210 13:27:54.376808  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bfd354ff383241bb1649bf5b0926b21820572cd5cdd6b975c30be9b3b1a3e9b0"
	I0210 13:27:54.410609  689817 logs.go:123] Gathering logs for storage-provisioner [9c223eae833ec780bdacab65f7a885047e04740e48b7cd90dbdce507d087969a] ...
	I0210 13:27:54.410639  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c223eae833ec780bdacab65f7a885047e04740e48b7cd90dbdce507d087969a"
	I0210 13:27:54.443452  689817 logs.go:123] Gathering logs for kubelet ...
	I0210 13:27:54.443478  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:27:54.527929  689817 logs.go:123] Gathering logs for dmesg ...
	I0210 13:27:54.527979  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:27:54.542227  689817 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:27:54.542268  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0210 13:27:54.641377  689817 logs.go:123] Gathering logs for etcd [92f9c8b03501892b9afb12729375bb29bc3a99633166557dc27a11936957bbc9] ...
	I0210 13:27:54.641418  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92f9c8b03501892b9afb12729375bb29bc3a99633166557dc27a11936957bbc9"
	I0210 13:27:54.688223  689817 logs.go:123] Gathering logs for kube-proxy [e2562695b2f325b7a32533b0bf97a658ef2ae704f29306876f7f4ed5ae008225] ...
	I0210 13:27:54.688271  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2562695b2f325b7a32533b0bf97a658ef2ae704f29306876f7f4ed5ae008225"
	I0210 13:27:54.725502  689817 logs.go:123] Gathering logs for kube-apiserver [d627e9b7c3fa6a5649d806da19c32d2f57dec9716719b51822e8311467253f6a] ...
	I0210 13:27:54.725539  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d627e9b7c3fa6a5649d806da19c32d2f57dec9716719b51822e8311467253f6a"
	I0210 13:27:54.765130  689817 logs.go:123] Gathering logs for kube-scheduler [e43d7d065b40a13592892ab82740b9367609ef546af51f15305a3ed11cab3a31] ...
	I0210 13:27:54.765167  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e43d7d065b40a13592892ab82740b9367609ef546af51f15305a3ed11cab3a31"
	I0210 13:27:54.800179  689817 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:27:54.800207  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:27:55.252259  689817 logs.go:123] Gathering logs for container status ...
	I0210 13:27:55.252300  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:27:57.789687  689817 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8444/healthz ...
	I0210 13:27:57.794618  689817 api_server.go:279] https://192.168.50.61:8444/healthz returned 200:
	ok
	I0210 13:27:57.795699  689817 api_server.go:141] control plane version: v1.32.1
	I0210 13:27:57.795724  689817 api_server.go:131] duration metric: took 3.926098165s to wait for apiserver health ...
	I0210 13:27:57.795735  689817 system_pods.go:43] waiting for kube-system pods to appear ...
	I0210 13:27:57.795772  689817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:27:57.795820  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:27:57.829148  689817 cri.go:89] found id: "d627e9b7c3fa6a5649d806da19c32d2f57dec9716719b51822e8311467253f6a"
	I0210 13:27:57.829179  689817 cri.go:89] found id: ""
	I0210 13:27:57.829190  689817 logs.go:282] 1 containers: [d627e9b7c3fa6a5649d806da19c32d2f57dec9716719b51822e8311467253f6a]
	I0210 13:27:57.829265  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:57.833209  689817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:27:57.833272  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:27:57.865761  689817 cri.go:89] found id: "92f9c8b03501892b9afb12729375bb29bc3a99633166557dc27a11936957bbc9"
	I0210 13:27:57.865789  689817 cri.go:89] found id: ""
	I0210 13:27:57.865799  689817 logs.go:282] 1 containers: [92f9c8b03501892b9afb12729375bb29bc3a99633166557dc27a11936957bbc9]
	I0210 13:27:57.865866  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:57.869409  689817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:27:57.869480  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:27:57.905847  689817 cri.go:89] found id: "c69258e06a494a7d31ccc77bb451b1e2def37ce0dbe569f1723c6375331b9844"
	I0210 13:27:57.905875  689817 cri.go:89] found id: ""
	I0210 13:27:57.905886  689817 logs.go:282] 1 containers: [c69258e06a494a7d31ccc77bb451b1e2def37ce0dbe569f1723c6375331b9844]
	I0210 13:27:57.905956  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:57.911821  689817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:27:57.911896  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:27:57.950779  689817 cri.go:89] found id: "e43d7d065b40a13592892ab82740b9367609ef546af51f15305a3ed11cab3a31"
	I0210 13:27:57.950803  689817 cri.go:89] found id: ""
	I0210 13:27:57.950810  689817 logs.go:282] 1 containers: [e43d7d065b40a13592892ab82740b9367609ef546af51f15305a3ed11cab3a31]
	I0210 13:27:57.950880  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:57.954573  689817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:27:57.954651  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:27:57.991678  689817 cri.go:89] found id: "e2562695b2f325b7a32533b0bf97a658ef2ae704f29306876f7f4ed5ae008225"
	I0210 13:27:57.991705  689817 cri.go:89] found id: ""
	I0210 13:27:57.991717  689817 logs.go:282] 1 containers: [e2562695b2f325b7a32533b0bf97a658ef2ae704f29306876f7f4ed5ae008225]
	I0210 13:27:57.991772  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:57.995971  689817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:27:57.996063  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:27:58.029073  689817 cri.go:89] found id: "ac2812f3cb686632428ed2027438bdc46bec1328925155fe9268bc7049fcdfaa"
	I0210 13:27:58.029098  689817 cri.go:89] found id: ""
	I0210 13:27:58.029144  689817 logs.go:282] 1 containers: [ac2812f3cb686632428ed2027438bdc46bec1328925155fe9268bc7049fcdfaa]
	I0210 13:27:58.029212  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:58.034012  689817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:27:58.034073  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:27:58.071316  689817 cri.go:89] found id: ""
	I0210 13:27:58.071346  689817 logs.go:282] 0 containers: []
	W0210 13:27:58.071358  689817 logs.go:284] No container was found matching "kindnet"
	I0210 13:27:58.071367  689817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:27:58.071438  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:27:58.105280  689817 cri.go:89] found id: "bfd354ff383241bb1649bf5b0926b21820572cd5cdd6b975c30be9b3b1a3e9b0"
	I0210 13:27:58.105308  689817 cri.go:89] found id: ""
	I0210 13:27:58.105319  689817 logs.go:282] 1 containers: [bfd354ff383241bb1649bf5b0926b21820572cd5cdd6b975c30be9b3b1a3e9b0]
	I0210 13:27:58.105390  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:58.109074  689817 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0210 13:27:58.109169  689817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0210 13:27:58.141391  689817 cri.go:89] found id: "e7104d3c43d5daf3e430fe8c4dbd2854e5da62e7f1002ea1cbcf90e2662e5863"
	I0210 13:27:58.141415  689817 cri.go:89] found id: "9c223eae833ec780bdacab65f7a885047e04740e48b7cd90dbdce507d087969a"
	I0210 13:27:58.141422  689817 cri.go:89] found id: ""
	I0210 13:27:58.141432  689817 logs.go:282] 2 containers: [e7104d3c43d5daf3e430fe8c4dbd2854e5da62e7f1002ea1cbcf90e2662e5863 9c223eae833ec780bdacab65f7a885047e04740e48b7cd90dbdce507d087969a]
	I0210 13:27:58.141490  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:58.144977  689817 ssh_runner.go:195] Run: which crictl
	I0210 13:27:58.148249  689817 logs.go:123] Gathering logs for kube-controller-manager [ac2812f3cb686632428ed2027438bdc46bec1328925155fe9268bc7049fcdfaa] ...
	I0210 13:27:58.148272  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac2812f3cb686632428ed2027438bdc46bec1328925155fe9268bc7049fcdfaa"
	I0210 13:27:58.201328  689817 logs.go:123] Gathering logs for kubelet ...
	I0210 13:27:58.201360  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:27:58.296953  689817 logs.go:123] Gathering logs for dmesg ...
	I0210 13:27:58.297010  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:27:58.311276  689817 logs.go:123] Gathering logs for etcd [92f9c8b03501892b9afb12729375bb29bc3a99633166557dc27a11936957bbc9] ...
	I0210 13:27:58.311312  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92f9c8b03501892b9afb12729375bb29bc3a99633166557dc27a11936957bbc9"
	I0210 13:27:58.361415  689817 logs.go:123] Gathering logs for coredns [c69258e06a494a7d31ccc77bb451b1e2def37ce0dbe569f1723c6375331b9844] ...
	I0210 13:27:58.361452  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c69258e06a494a7d31ccc77bb451b1e2def37ce0dbe569f1723c6375331b9844"
	I0210 13:27:58.396072  689817 logs.go:123] Gathering logs for kube-apiserver [d627e9b7c3fa6a5649d806da19c32d2f57dec9716719b51822e8311467253f6a] ...
	I0210 13:27:58.396109  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d627e9b7c3fa6a5649d806da19c32d2f57dec9716719b51822e8311467253f6a"
	I0210 13:27:58.448027  689817 logs.go:123] Gathering logs for kube-scheduler [e43d7d065b40a13592892ab82740b9367609ef546af51f15305a3ed11cab3a31] ...
	I0210 13:27:58.448064  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e43d7d065b40a13592892ab82740b9367609ef546af51f15305a3ed11cab3a31"
	I0210 13:27:58.481535  689817 logs.go:123] Gathering logs for kube-proxy [e2562695b2f325b7a32533b0bf97a658ef2ae704f29306876f7f4ed5ae008225] ...
	I0210 13:27:58.481573  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2562695b2f325b7a32533b0bf97a658ef2ae704f29306876f7f4ed5ae008225"
	I0210 13:27:58.514411  689817 logs.go:123] Gathering logs for kubernetes-dashboard [bfd354ff383241bb1649bf5b0926b21820572cd5cdd6b975c30be9b3b1a3e9b0] ...
	I0210 13:27:58.514445  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bfd354ff383241bb1649bf5b0926b21820572cd5cdd6b975c30be9b3b1a3e9b0"
	I0210 13:27:58.549570  689817 logs.go:123] Gathering logs for storage-provisioner [9c223eae833ec780bdacab65f7a885047e04740e48b7cd90dbdce507d087969a] ...
	I0210 13:27:58.549603  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c223eae833ec780bdacab65f7a885047e04740e48b7cd90dbdce507d087969a"
	I0210 13:27:58.592297  689817 logs.go:123] Gathering logs for container status ...
	I0210 13:27:58.592330  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:27:58.631626  689817 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:27:58.631667  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0210 13:27:58.727480  689817 logs.go:123] Gathering logs for storage-provisioner [e7104d3c43d5daf3e430fe8c4dbd2854e5da62e7f1002ea1cbcf90e2662e5863] ...
	I0210 13:27:58.727519  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7104d3c43d5daf3e430fe8c4dbd2854e5da62e7f1002ea1cbcf90e2662e5863"
	I0210 13:27:58.760031  689817 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:27:58.760069  689817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:28:01.664367  689817 system_pods.go:59] 8 kube-system pods found
	I0210 13:28:01.664422  689817 system_pods.go:61] "coredns-668d6bf9bc-fj2zq" [583359d8-8ada-4747-8682-6176db3f798a] Running
	I0210 13:28:01.664431  689817 system_pods.go:61] "etcd-default-k8s-diff-port-957542" [15bd93be-c696-42f6-9406-abe5d824a9d0] Running
	I0210 13:28:01.664436  689817 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-957542" [475365bf-2504-46d7-a068-5f5e3a9c773e] Running
	I0210 13:28:01.664442  689817 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-957542" [21fcb133-d0ed-4608-8d25-3719f15d0aaa] Running
	I0210 13:28:01.664446  689817 system_pods.go:61] "kube-proxy-8th94" [1e1a48fd-55a5-48e4-84dc-638f9d650e12] Running
	I0210 13:28:01.664451  689817 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-957542" [1bbe3544-9217-4b50-9903-8b0edf49f996] Running
	I0210 13:28:01.664459  689817 system_pods.go:61] "metrics-server-f79f97bbb-sg6xj" [4fd14781-7917-44e7-8358-2ae86a7bac81] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0210 13:28:01.664465  689817 system_pods.go:61] "storage-provisioner" [30e8603f-89cf-4919-9bf4-bcece8c32934] Running
	I0210 13:28:01.664478  689817 system_pods.go:74] duration metric: took 3.868731638s to wait for pod list to return data ...
	I0210 13:28:01.664492  689817 default_sa.go:34] waiting for default service account to be created ...
	I0210 13:28:01.666845  689817 default_sa.go:45] found service account: "default"
	I0210 13:28:01.666865  689817 default_sa.go:55] duration metric: took 2.365764ms for default service account to be created ...
	I0210 13:28:01.666874  689817 system_pods.go:116] waiting for k8s-apps to be running ...
	I0210 13:28:01.669411  689817 system_pods.go:86] 8 kube-system pods found
	I0210 13:28:01.669440  689817 system_pods.go:89] "coredns-668d6bf9bc-fj2zq" [583359d8-8ada-4747-8682-6176db3f798a] Running
	I0210 13:28:01.669446  689817 system_pods.go:89] "etcd-default-k8s-diff-port-957542" [15bd93be-c696-42f6-9406-abe5d824a9d0] Running
	I0210 13:28:01.669451  689817 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-957542" [475365bf-2504-46d7-a068-5f5e3a9c773e] Running
	I0210 13:28:01.669455  689817 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-957542" [21fcb133-d0ed-4608-8d25-3719f15d0aaa] Running
	I0210 13:28:01.669459  689817 system_pods.go:89] "kube-proxy-8th94" [1e1a48fd-55a5-48e4-84dc-638f9d650e12] Running
	I0210 13:28:01.669463  689817 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-957542" [1bbe3544-9217-4b50-9903-8b0edf49f996] Running
	I0210 13:28:01.669469  689817 system_pods.go:89] "metrics-server-f79f97bbb-sg6xj" [4fd14781-7917-44e7-8358-2ae86a7bac81] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0210 13:28:01.669474  689817 system_pods.go:89] "storage-provisioner" [30e8603f-89cf-4919-9bf4-bcece8c32934] Running
	I0210 13:28:01.669482  689817 system_pods.go:126] duration metric: took 2.601853ms to wait for k8s-apps to be running ...
	I0210 13:28:01.669489  689817 system_svc.go:44] waiting for kubelet service to be running ....
	I0210 13:28:01.669552  689817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 13:28:01.684641  689817 system_svc.go:56] duration metric: took 15.145438ms WaitForService to wait for kubelet
	I0210 13:28:01.684677  689817 kubeadm.go:582] duration metric: took 4m28.645432042s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0210 13:28:01.684724  689817 node_conditions.go:102] verifying NodePressure condition ...
	I0210 13:28:01.687051  689817 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 13:28:01.687081  689817 node_conditions.go:123] node cpu capacity is 2
	I0210 13:28:01.687115  689817 node_conditions.go:105] duration metric: took 2.383739ms to run NodePressure ...
	I0210 13:28:01.687149  689817 start.go:241] waiting for startup goroutines ...
	I0210 13:28:01.687161  689817 start.go:246] waiting for cluster config update ...
	I0210 13:28:01.687172  689817 start.go:255] writing updated cluster config ...
	I0210 13:28:01.687476  689817 ssh_runner.go:195] Run: rm -f paused
	I0210 13:28:01.739316  689817 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0210 13:28:01.741286  689817 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-957542" cluster and "default" namespace by default
	I0210 13:28:14.437678  688914 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:28:14.437931  688914 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:28:54.436979  688914 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:28:54.437271  688914 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:28:54.437281  688914 kubeadm.go:310] 
	I0210 13:28:54.437319  688914 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0210 13:28:54.437355  688914 kubeadm.go:310] 		timed out waiting for the condition
	I0210 13:28:54.437361  688914 kubeadm.go:310] 
	I0210 13:28:54.437390  688914 kubeadm.go:310] 	This error is likely caused by:
	I0210 13:28:54.437468  688914 kubeadm.go:310] 		- The kubelet is not running
	I0210 13:28:54.437614  688914 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0210 13:28:54.437628  688914 kubeadm.go:310] 
	I0210 13:28:54.437762  688914 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0210 13:28:54.437806  688914 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0210 13:28:54.437850  688914 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0210 13:28:54.437863  688914 kubeadm.go:310] 
	I0210 13:28:54.437986  688914 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0210 13:28:54.438064  688914 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0210 13:28:54.438084  688914 kubeadm.go:310] 
	I0210 13:28:54.438245  688914 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0210 13:28:54.438388  688914 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0210 13:28:54.438510  688914 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0210 13:28:54.438608  688914 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0210 13:28:54.438622  688914 kubeadm.go:310] 
	I0210 13:28:54.439017  688914 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0210 13:28:54.439094  688914 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0210 13:28:54.439183  688914 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0210 13:28:54.439220  688914 kubeadm.go:394] duration metric: took 8m1.096783715s to StartCluster
	I0210 13:28:54.439356  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:28:54.439446  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:28:54.481711  688914 cri.go:89] found id: ""
	I0210 13:28:54.481745  688914 logs.go:282] 0 containers: []
	W0210 13:28:54.481753  688914 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:28:54.481759  688914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:28:54.481826  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:28:54.515485  688914 cri.go:89] found id: ""
	I0210 13:28:54.515513  688914 logs.go:282] 0 containers: []
	W0210 13:28:54.515521  688914 logs.go:284] No container was found matching "etcd"
	I0210 13:28:54.515528  688914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:28:54.515585  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:28:54.565719  688914 cri.go:89] found id: ""
	I0210 13:28:54.565767  688914 logs.go:282] 0 containers: []
	W0210 13:28:54.565779  688914 logs.go:284] No container was found matching "coredns"
	I0210 13:28:54.565788  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:28:54.565864  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:28:54.597764  688914 cri.go:89] found id: ""
	I0210 13:28:54.597806  688914 logs.go:282] 0 containers: []
	W0210 13:28:54.597814  688914 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:28:54.597821  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:28:54.597888  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:28:54.631935  688914 cri.go:89] found id: ""
	I0210 13:28:54.631965  688914 logs.go:282] 0 containers: []
	W0210 13:28:54.631975  688914 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:28:54.631982  688914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:28:54.632052  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:28:54.664095  688914 cri.go:89] found id: ""
	I0210 13:28:54.664135  688914 logs.go:282] 0 containers: []
	W0210 13:28:54.664147  688914 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:28:54.664154  688914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:28:54.664213  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:28:54.695397  688914 cri.go:89] found id: ""
	I0210 13:28:54.695433  688914 logs.go:282] 0 containers: []
	W0210 13:28:54.695445  688914 logs.go:284] No container was found matching "kindnet"
	I0210 13:28:54.695454  688914 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 13:28:54.695522  688914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 13:28:54.732080  688914 cri.go:89] found id: ""
	I0210 13:28:54.732115  688914 logs.go:282] 0 containers: []
	W0210 13:28:54.732127  688914 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 13:28:54.732150  688914 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:28:54.732163  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:28:54.838309  688914 logs.go:123] Gathering logs for container status ...
	I0210 13:28:54.838352  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:28:54.876415  688914 logs.go:123] Gathering logs for kubelet ...
	I0210 13:28:54.876444  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:28:54.925312  688914 logs.go:123] Gathering logs for dmesg ...
	I0210 13:28:54.925353  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:28:54.938075  688914 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:28:54.938108  688914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:28:55.007575  688914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0210 13:28:55.007606  688914 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0210 13:28:55.007664  688914 out.go:270] * 
	W0210 13:28:55.007737  688914 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0210 13:28:55.007760  688914 out.go:270] * 
	W0210 13:28:55.008646  688914 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0210 13:28:55.012559  688914 out.go:201] 
	W0210 13:28:55.013936  688914 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0210 13:28:55.013983  688914 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0210 13:28:55.014019  688914 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0210 13:28:55.015512  688914 out.go:201] 
	
	
	==> CRI-O <==
	Feb 10 13:44:21 old-k8s-version-745712 crio[634]: time="2025-02-10 13:44:21.093900683Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739195061093879530,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7ad452f1-1cb7-406e-b231-8a35abf162db name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 13:44:21 old-k8s-version-745712 crio[634]: time="2025-02-10 13:44:21.094431767Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e150981b-1727-41de-a3f7-030c209669f6 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:44:21 old-k8s-version-745712 crio[634]: time="2025-02-10 13:44:21.094493090Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e150981b-1727-41de-a3f7-030c209669f6 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:44:21 old-k8s-version-745712 crio[634]: time="2025-02-10 13:44:21.094526926Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e150981b-1727-41de-a3f7-030c209669f6 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:44:21 old-k8s-version-745712 crio[634]: time="2025-02-10 13:44:21.123500152Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=726e032d-5167-4042-a324-eaff6b256f7f name=/runtime.v1.RuntimeService/Version
	Feb 10 13:44:21 old-k8s-version-745712 crio[634]: time="2025-02-10 13:44:21.123618820Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=726e032d-5167-4042-a324-eaff6b256f7f name=/runtime.v1.RuntimeService/Version
	Feb 10 13:44:21 old-k8s-version-745712 crio[634]: time="2025-02-10 13:44:21.124524445Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=772e80b0-0bb0-4c8b-93bd-1ac1964299a1 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 13:44:21 old-k8s-version-745712 crio[634]: time="2025-02-10 13:44:21.124956082Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739195061124933057,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=772e80b0-0bb0-4c8b-93bd-1ac1964299a1 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 13:44:21 old-k8s-version-745712 crio[634]: time="2025-02-10 13:44:21.125465457Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=77d7efb8-9ab2-4a92-9075-cbfeb6dcc0ae name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:44:21 old-k8s-version-745712 crio[634]: time="2025-02-10 13:44:21.125533423Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=77d7efb8-9ab2-4a92-9075-cbfeb6dcc0ae name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:44:21 old-k8s-version-745712 crio[634]: time="2025-02-10 13:44:21.125611676Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=77d7efb8-9ab2-4a92-9075-cbfeb6dcc0ae name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:44:21 old-k8s-version-745712 crio[634]: time="2025-02-10 13:44:21.155880778Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=926361cc-9a06-4d7f-9e90-a05aad8cb84f name=/runtime.v1.RuntimeService/Version
	Feb 10 13:44:21 old-k8s-version-745712 crio[634]: time="2025-02-10 13:44:21.155968343Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=926361cc-9a06-4d7f-9e90-a05aad8cb84f name=/runtime.v1.RuntimeService/Version
	Feb 10 13:44:21 old-k8s-version-745712 crio[634]: time="2025-02-10 13:44:21.157172865Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=487d893d-6c90-4e93-bd2c-451e01264b7a name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 13:44:21 old-k8s-version-745712 crio[634]: time="2025-02-10 13:44:21.157664074Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739195061157552162,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=487d893d-6c90-4e93-bd2c-451e01264b7a name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 13:44:21 old-k8s-version-745712 crio[634]: time="2025-02-10 13:44:21.158192602Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c8976a3b-372a-475d-a58d-d9ba61f0c689 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:44:21 old-k8s-version-745712 crio[634]: time="2025-02-10 13:44:21.158243387Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c8976a3b-372a-475d-a58d-d9ba61f0c689 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:44:21 old-k8s-version-745712 crio[634]: time="2025-02-10 13:44:21.158277869Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c8976a3b-372a-475d-a58d-d9ba61f0c689 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:44:21 old-k8s-version-745712 crio[634]: time="2025-02-10 13:44:21.188457358Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a24648ca-fe26-4c11-8678-a88f65e789cc name=/runtime.v1.RuntimeService/Version
	Feb 10 13:44:21 old-k8s-version-745712 crio[634]: time="2025-02-10 13:44:21.188556517Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a24648ca-fe26-4c11-8678-a88f65e789cc name=/runtime.v1.RuntimeService/Version
	Feb 10 13:44:21 old-k8s-version-745712 crio[634]: time="2025-02-10 13:44:21.189760560Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d10b25ac-e670-4679-a994-ee708b1acc55 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 13:44:21 old-k8s-version-745712 crio[634]: time="2025-02-10 13:44:21.190122720Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739195061190103963,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d10b25ac-e670-4679-a994-ee708b1acc55 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 13:44:21 old-k8s-version-745712 crio[634]: time="2025-02-10 13:44:21.190726727Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=682e1127-da91-4fa1-b400-71e7d16cbeb8 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:44:21 old-k8s-version-745712 crio[634]: time="2025-02-10 13:44:21.190778102Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=682e1127-da91-4fa1-b400-71e7d16cbeb8 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:44:21 old-k8s-version-745712 crio[634]: time="2025-02-10 13:44:21.190809838Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=682e1127-da91-4fa1-b400-71e7d16cbeb8 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Feb10 13:20] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.057667] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039973] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.114070] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.167757] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.632628] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.042916] systemd-fstab-generator[560]: Ignoring "noauto" option for root device
	[  +0.063154] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064261] systemd-fstab-generator[572]: Ignoring "noauto" option for root device
	[  +0.151765] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.139010] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.215149] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +6.104183] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.063040] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.778959] systemd-fstab-generator[1007]: Ignoring "noauto" option for root device
	[Feb10 13:21] kauditd_printk_skb: 46 callbacks suppressed
	[Feb10 13:24] systemd-fstab-generator[5077]: Ignoring "noauto" option for root device
	[Feb10 13:26] systemd-fstab-generator[5356]: Ignoring "noauto" option for root device
	[  +0.069372] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 13:44:21 up 23 min,  0 users,  load average: 0.04, 0.04, 0.00
	Linux old-k8s-version-745712 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Feb 10 13:44:20 old-k8s-version-745712 kubelet[7223]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	Feb 10 13:44:20 old-k8s-version-745712 kubelet[7223]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc00017f500, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc0009974d0, 0x24, 0x0, ...)
	Feb 10 13:44:20 old-k8s-version-745712 kubelet[7223]:         /usr/local/go/src/net/dial.go:221 +0x47d
	Feb 10 13:44:20 old-k8s-version-745712 kubelet[7223]: net.(*Dialer).DialContext(0xc000925e00, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc0009974d0, 0x24, 0x0, 0x0, 0x0, ...)
	Feb 10 13:44:20 old-k8s-version-745712 kubelet[7223]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Feb 10 13:44:20 old-k8s-version-745712 kubelet[7223]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000c587e0, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc0009974d0, 0x24, 0x60, 0x7fb1385131f0, 0x118, ...)
	Feb 10 13:44:20 old-k8s-version-745712 kubelet[7223]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Feb 10 13:44:20 old-k8s-version-745712 kubelet[7223]: net/http.(*Transport).dial(0xc000c66000, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc0009974d0, 0x24, 0x0, 0x0, 0x0, ...)
	Feb 10 13:44:20 old-k8s-version-745712 kubelet[7223]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Feb 10 13:44:20 old-k8s-version-745712 kubelet[7223]: net/http.(*Transport).dialConn(0xc000c66000, 0x4f7fe00, 0xc000052030, 0x0, 0xc000220300, 0x5, 0xc0009974d0, 0x24, 0x0, 0xc000318b40, ...)
	Feb 10 13:44:20 old-k8s-version-745712 kubelet[7223]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Feb 10 13:44:20 old-k8s-version-745712 kubelet[7223]: net/http.(*Transport).dialConnFor(0xc000c66000, 0xc0009eb8c0)
	Feb 10 13:44:20 old-k8s-version-745712 kubelet[7223]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Feb 10 13:44:20 old-k8s-version-745712 kubelet[7223]: created by net/http.(*Transport).queueForDial
	Feb 10 13:44:20 old-k8s-version-745712 kubelet[7223]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Feb 10 13:44:20 old-k8s-version-745712 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 10 13:44:20 old-k8s-version-745712 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 10 13:44:20 old-k8s-version-745712 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 181.
	Feb 10 13:44:20 old-k8s-version-745712 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 10 13:44:20 old-k8s-version-745712 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 10 13:44:21 old-k8s-version-745712 kubelet[7255]: I0210 13:44:21.031989    7255 server.go:416] Version: v1.20.0
	Feb 10 13:44:21 old-k8s-version-745712 kubelet[7255]: I0210 13:44:21.032340    7255 server.go:837] Client rotation is on, will bootstrap in background
	Feb 10 13:44:21 old-k8s-version-745712 kubelet[7255]: I0210 13:44:21.035157    7255 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 10 13:44:21 old-k8s-version-745712 kubelet[7255]: I0210 13:44:21.036556    7255 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Feb 10 13:44:21 old-k8s-version-745712 kubelet[7255]: W0210 13:44:21.036749    7255 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-745712 -n old-k8s-version-745712
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-745712 -n old-k8s-version-745712: exit status 2 (242.749346ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-745712" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (383.47s)
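For reference, the failure above reduces to the kubelet on the old-k8s-version (v1.20.0) node never becoming healthy, so 'kubeadm init' times out waiting for the control plane. A minimal manual follow-up sketch, built only from the commands that the kubeadm/minikube output itself suggests (the profile name old-k8s-version-745712 comes from this report; whether the cgroup-driver override actually resolves this run is not verified here):

	# Inspect the kubelet on the node, as suggested in the kubeadm output above.
	out/minikube-linux-amd64 -p old-k8s-version-745712 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-745712 ssh "sudo journalctl -xeu kubelet | tail -n 100"

	# List control-plane containers through CRI-O, again as the kubeadm output recommends.
	out/minikube-linux-amd64 -p old-k8s-version-745712 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"

	# Retry the start with the cgroup-driver override mentioned in the suggestion.
	out/minikube-linux-amd64 start -p old-k8s-version-745712 --extra-config=kubelet.cgroup-driver=systemd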

                                                
                                    

Test pass (277/327)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 8.54
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.15
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.32.1/json-events 3.84
13 TestDownloadOnly/v1.32.1/preload-exists 0
17 TestDownloadOnly/v1.32.1/LogsDuration 0.07
18 TestDownloadOnly/v1.32.1/DeleteAll 0.14
19 TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.63
22 TestOffline 134.47
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 130.06
31 TestAddons/serial/GCPAuth/Namespaces 0.13
32 TestAddons/serial/GCPAuth/FakeCredentials 10.52
35 TestAddons/parallel/Registry 21.05
37 TestAddons/parallel/InspektorGadget 11.68
38 TestAddons/parallel/MetricsServer 5.99
40 TestAddons/parallel/CSI 49.7
41 TestAddons/parallel/Headlamp 19.84
42 TestAddons/parallel/CloudSpanner 5.65
43 TestAddons/parallel/LocalPath 55.31
44 TestAddons/parallel/NvidiaDevicePlugin 6.55
45 TestAddons/parallel/Yakd 10.82
47 TestAddons/StoppedEnableDisable 91.07
48 TestCertOptions 65.02
49 TestCertExpiration 280.93
51 TestForceSystemdFlag 73.17
52 TestForceSystemdEnv 67.03
54 TestKVMDriverInstallOrUpdate 3.91
58 TestErrorSpam/setup 40.64
59 TestErrorSpam/start 0.37
60 TestErrorSpam/status 0.74
61 TestErrorSpam/pause 1.5
62 TestErrorSpam/unpause 1.72
63 TestErrorSpam/stop 4.14
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 59.27
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 38.13
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.07
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.23
75 TestFunctional/serial/CacheCmd/cache/add_local 1.89
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.24
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.71
80 TestFunctional/serial/CacheCmd/cache/delete 0.11
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
83 TestFunctional/serial/ExtraConfig 33.37
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.38
86 TestFunctional/serial/LogsFileCmd 1.39
87 TestFunctional/serial/InvalidService 5.33
89 TestFunctional/parallel/ConfigCmd 0.39
90 TestFunctional/parallel/DashboardCmd 35.76
91 TestFunctional/parallel/DryRun 0.33
92 TestFunctional/parallel/InternationalLanguage 0.16
93 TestFunctional/parallel/StatusCmd 1.09
97 TestFunctional/parallel/ServiceCmdConnect 10.63
98 TestFunctional/parallel/AddonsCmd 0.14
99 TestFunctional/parallel/PersistentVolumeClaim 48.06
101 TestFunctional/parallel/SSHCmd 0.46
102 TestFunctional/parallel/CpCmd 1.34
103 TestFunctional/parallel/MySQL 21.55
104 TestFunctional/parallel/FileSync 0.3
105 TestFunctional/parallel/CertSync 1.58
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.48
113 TestFunctional/parallel/License 0.19
114 TestFunctional/parallel/Version/short 0.06
115 TestFunctional/parallel/Version/components 0.75
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.74
120 TestFunctional/parallel/ImageCommands/ImageBuild 4.14
121 TestFunctional/parallel/ImageCommands/Setup 1.52
122 TestFunctional/parallel/ServiceCmd/DeployApp 10.17
123 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.62
133 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.87
134 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.58
135 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.51
136 TestFunctional/parallel/ImageCommands/ImageRemove 0.52
137 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.04
138 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.85
139 TestFunctional/parallel/ServiceCmd/List 0.37
140 TestFunctional/parallel/ServiceCmd/JSONOutput 0.35
141 TestFunctional/parallel/ServiceCmd/HTTPS 0.32
142 TestFunctional/parallel/ServiceCmd/Format 0.34
143 TestFunctional/parallel/ServiceCmd/URL 0.36
144 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
145 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
146 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
147 TestFunctional/parallel/ProfileCmd/profile_not_create 0.49
148 TestFunctional/parallel/MountCmd/any-port 8.96
149 TestFunctional/parallel/ProfileCmd/profile_list 0.5
150 TestFunctional/parallel/ProfileCmd/profile_json_output 0.62
151 TestFunctional/parallel/MountCmd/specific-port 2.19
152 TestFunctional/parallel/MountCmd/VerifyCleanup 1.35
153 TestFunctional/delete_echo-server_images 0.03
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 186.18
161 TestMultiControlPlane/serial/DeployApp 5.87
162 TestMultiControlPlane/serial/PingHostFromPods 1.16
163 TestMultiControlPlane/serial/AddWorkerNode 54.51
164 TestMultiControlPlane/serial/NodeLabels 0.07
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.86
166 TestMultiControlPlane/serial/CopyFile 13.18
167 TestMultiControlPlane/serial/StopSecondaryNode 91.64
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.64
169 TestMultiControlPlane/serial/RestartSecondaryNode 48.49
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.85
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 445.73
172 TestMultiControlPlane/serial/DeleteSecondaryNode 18.4
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.64
174 TestMultiControlPlane/serial/StopCluster 272.73
175 TestMultiControlPlane/serial/RestartCluster 99.56
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.63
177 TestMultiControlPlane/serial/AddSecondaryNode 77.96
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.89
182 TestJSONOutput/start/Command 55.43
183 TestJSONOutput/start/Audit 0
185 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/pause/Command 0.67
189 TestJSONOutput/pause/Audit 0
191 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/unpause/Command 0.59
195 TestJSONOutput/unpause/Audit 0
197 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/stop/Command 7.36
201 TestJSONOutput/stop/Audit 0
203 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
205 TestErrorJSONOutput 0.21
210 TestMainNoArgs 0.05
211 TestMinikubeProfile 93.14
214 TestMountStart/serial/StartWithMountFirst 24.82
215 TestMountStart/serial/VerifyMountFirst 0.38
216 TestMountStart/serial/StartWithMountSecond 27.7
217 TestMountStart/serial/VerifyMountSecond 0.39
218 TestMountStart/serial/DeleteFirst 0.72
219 TestMountStart/serial/VerifyMountPostDelete 0.39
220 TestMountStart/serial/Stop 1.28
221 TestMountStart/serial/RestartStopped 22.89
222 TestMountStart/serial/VerifyMountPostStop 0.38
225 TestMultiNode/serial/FreshStart2Nodes 111.16
226 TestMultiNode/serial/DeployApp2Nodes 4.93
227 TestMultiNode/serial/PingHostFrom2Pods 0.78
228 TestMultiNode/serial/AddNode 51.98
229 TestMultiNode/serial/MultiNodeLabels 0.07
230 TestMultiNode/serial/ProfileList 0.56
231 TestMultiNode/serial/CopyFile 7.26
232 TestMultiNode/serial/StopNode 2.25
233 TestMultiNode/serial/StartAfterStop 39.98
234 TestMultiNode/serial/RestartKeepsNodes 340.91
235 TestMultiNode/serial/DeleteNode 2.75
236 TestMultiNode/serial/StopMultiNode 181.69
237 TestMultiNode/serial/RestartMultiNode 117.2
238 TestMultiNode/serial/ValidateNameConflict 44.28
245 TestScheduledStopUnix 113.94
249 TestRunningBinaryUpgrade 213.78
254 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
255 TestNoKubernetes/serial/StartWithK8s 116.38
256 TestNoKubernetes/serial/StartWithStopK8s 15.12
257 TestNoKubernetes/serial/Start 28.25
258 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
259 TestNoKubernetes/serial/ProfileList 1.01
260 TestNoKubernetes/serial/Stop 1.3
261 TestNoKubernetes/serial/StartNoArgs 60.73
262 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.23
263 TestStoppedBinaryUpgrade/Setup 0.44
264 TestStoppedBinaryUpgrade/Upgrade 149.34
272 TestNetworkPlugins/group/false 6.13
284 TestPause/serial/Start 59.11
285 TestNetworkPlugins/group/auto/Start 81.18
286 TestPause/serial/SecondStartNoReconfiguration 47.76
287 TestStoppedBinaryUpgrade/MinikubeLogs 0.77
288 TestNetworkPlugins/group/kindnet/Start 76.77
289 TestPause/serial/Pause 0.72
290 TestPause/serial/VerifyStatus 0.24
291 TestPause/serial/Unpause 0.6
292 TestPause/serial/PauseAgain 0.73
293 TestPause/serial/DeletePaused 0.99
294 TestPause/serial/VerifyDeletedResources 0.64
295 TestNetworkPlugins/group/calico/Start 84.12
296 TestNetworkPlugins/group/auto/KubeletFlags 0.21
297 TestNetworkPlugins/group/auto/NetCatPod 11.22
298 TestNetworkPlugins/group/auto/DNS 0.14
299 TestNetworkPlugins/group/auto/Localhost 0.15
300 TestNetworkPlugins/group/auto/HairPin 0.12
301 TestNetworkPlugins/group/kindnet/ControllerPod 6
302 TestNetworkPlugins/group/kindnet/KubeletFlags 0.23
303 TestNetworkPlugins/group/kindnet/NetCatPod 11.28
304 TestNetworkPlugins/group/custom-flannel/Start 73.19
305 TestNetworkPlugins/group/kindnet/DNS 0.15
306 TestNetworkPlugins/group/kindnet/Localhost 0.13
307 TestNetworkPlugins/group/kindnet/HairPin 0.12
308 TestNetworkPlugins/group/enable-default-cni/Start 84.99
309 TestNetworkPlugins/group/calico/ControllerPod 6.01
310 TestNetworkPlugins/group/calico/KubeletFlags 0.22
311 TestNetworkPlugins/group/calico/NetCatPod 10.23
312 TestNetworkPlugins/group/calico/DNS 0.15
313 TestNetworkPlugins/group/calico/Localhost 0.14
314 TestNetworkPlugins/group/calico/HairPin 0.14
315 TestNetworkPlugins/group/flannel/Start 91.04
316 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.23
317 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.29
318 TestNetworkPlugins/group/custom-flannel/DNS 0.29
319 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
320 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
321 TestNetworkPlugins/group/bridge/Start 69.52
322 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.41
323 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.24
324 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
325 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
326 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
329 TestNetworkPlugins/group/flannel/ControllerPod 6.01
330 TestNetworkPlugins/group/flannel/KubeletFlags 0.22
331 TestNetworkPlugins/group/flannel/NetCatPod 13.27
332 TestNetworkPlugins/group/bridge/KubeletFlags 0.28
333 TestNetworkPlugins/group/bridge/NetCatPod 10.25
334 TestNetworkPlugins/group/flannel/DNS 0.14
335 TestNetworkPlugins/group/flannel/Localhost 0.11
336 TestNetworkPlugins/group/flannel/HairPin 0.11
337 TestNetworkPlugins/group/bridge/DNS 0.13
338 TestNetworkPlugins/group/bridge/Localhost 0.11
339 TestNetworkPlugins/group/bridge/HairPin 0.11
341 TestStartStop/group/no-preload/serial/FirstStart 73.64
343 TestStartStop/group/embed-certs/serial/FirstStart 109.04
344 TestStartStop/group/no-preload/serial/DeployApp 10.3
345 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.98
346 TestStartStop/group/no-preload/serial/Stop 90.81
347 TestStartStop/group/embed-certs/serial/DeployApp 8.26
348 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.93
349 TestStartStop/group/embed-certs/serial/Stop 91.21
350 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
351 TestStartStop/group/no-preload/serial/SecondStart 314.38
354 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.23
355 TestStartStop/group/embed-certs/serial/SecondStart 298.9
357 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 85.41
358 TestStartStop/group/old-k8s-version/serial/Stop 2.52
359 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.94
361 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.27
362 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.03
363 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.02
364 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
365 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 304.4
366 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
367 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.09
368 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
369 TestStartStop/group/no-preload/serial/Pause 2.98
371 TestStartStop/group/newest-cni/serial/FirstStart 45.1
372 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
373 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
374 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22
375 TestStartStop/group/embed-certs/serial/Pause 2.61
376 TestStartStop/group/newest-cni/serial/DeployApp 0
377 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.12
378 TestStartStop/group/newest-cni/serial/Stop 7.39
379 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
380 TestStartStop/group/newest-cni/serial/SecondStart 36.61
381 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
382 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
383 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.33
384 TestStartStop/group/newest-cni/serial/Pause 2.65
385 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
386 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
387 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
388 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.46
TestDownloadOnly/v1.20.0/json-events (8.54s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-152629 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-152629 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (8.544311059s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.54s)
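The -o=json (json-events) mode makes minikube emit its progress as JSON events, one per line, instead of the usual console output, which is what the json-events variant exercises. A small sketch of inspecting such a run by hand (flags and profile name copied from the invocation above; the jq filter is illustrative and not part of the test):

	# Pretty-print each JSON event emitted by a --download-only start.
	out/minikube-linux-amd64 start -o=json --download-only -p download-only-152629 \
	  --force --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2 | jq -c .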

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0210 12:05:20.204945  632352 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0210 12:05:20.205064  632352 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20383-625153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-152629
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-152629: exit status 85 (66.824841ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-152629 | jenkins | v1.35.0 | 10 Feb 25 12:05 UTC |          |
	|         | -p download-only-152629        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/10 12:05:11
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0210 12:05:11.704198  632364 out.go:345] Setting OutFile to fd 1 ...
	I0210 12:05:11.704299  632364 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 12:05:11.704307  632364 out.go:358] Setting ErrFile to fd 2...
	I0210 12:05:11.704312  632364 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 12:05:11.704511  632364 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20383-625153/.minikube/bin
	W0210 12:05:11.704648  632364 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20383-625153/.minikube/config/config.json: open /home/jenkins/minikube-integration/20383-625153/.minikube/config/config.json: no such file or directory
	I0210 12:05:11.705247  632364 out.go:352] Setting JSON to true
	I0210 12:05:11.706163  632364 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":13662,"bootTime":1739175450,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0210 12:05:11.706286  632364 start.go:139] virtualization: kvm guest
	I0210 12:05:11.708647  632364 out.go:97] [download-only-152629] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0210 12:05:11.708828  632364 notify.go:220] Checking for updates...
	W0210 12:05:11.708763  632364 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20383-625153/.minikube/cache/preloaded-tarball: no such file or directory
	I0210 12:05:11.710075  632364 out.go:169] MINIKUBE_LOCATION=20383
	I0210 12:05:11.711293  632364 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 12:05:11.712474  632364 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20383-625153/kubeconfig
	I0210 12:05:11.713529  632364 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20383-625153/.minikube
	I0210 12:05:11.714680  632364 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0210 12:05:11.716708  632364 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0210 12:05:11.716916  632364 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 12:05:11.752638  632364 out.go:97] Using the kvm2 driver based on user configuration
	I0210 12:05:11.752664  632364 start.go:297] selected driver: kvm2
	I0210 12:05:11.752671  632364 start.go:901] validating driver "kvm2" against <nil>
	I0210 12:05:11.753026  632364 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 12:05:11.753122  632364 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20383-625153/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0210 12:05:11.768588  632364 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0210 12:05:11.768650  632364 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0210 12:05:11.769198  632364 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0210 12:05:11.769340  632364 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0210 12:05:11.769371  632364 cni.go:84] Creating CNI manager for ""
	I0210 12:05:11.769437  632364 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 12:05:11.769449  632364 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0210 12:05:11.769498  632364 start.go:340] cluster config:
	{Name:download-only-152629 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-152629 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 12:05:11.769669  632364 iso.go:125] acquiring lock: {Name:mk013d189757e85c0699014f5ef29205d9d4927f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 12:05:11.771406  632364 out.go:97] Downloading VM boot image ...
	I0210 12:05:11.771442  632364 download.go:108] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/20383-625153/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0210 12:05:14.316412  632364 out.go:97] Starting "download-only-152629" primary control-plane node in "download-only-152629" cluster
	I0210 12:05:14.316452  632364 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0210 12:05:14.339315  632364 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0210 12:05:14.339363  632364 cache.go:56] Caching tarball of preloaded images
	I0210 12:05:14.339560  632364 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0210 12:05:14.341289  632364 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0210 12:05:14.341306  632364 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0210 12:05:14.369176  632364 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/20383-625153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-152629 host does not exist
	  To start a cluster, run: "minikube start -p download-only-152629"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.15s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-152629
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.32.1/json-events (3.84s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-131804 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-131804 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (3.836255526s)
--- PASS: TestDownloadOnly/v1.32.1/json-events (3.84s)

                                                
                                    
TestDownloadOnly/v1.32.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/preload-exists
I0210 12:05:24.386760  632352 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
I0210 12:05:24.386807  632352 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20383-625153/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-131804
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-131804: exit status 85 (68.675839ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-152629 | jenkins | v1.35.0 | 10 Feb 25 12:05 UTC |                     |
	|         | -p download-only-152629        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 10 Feb 25 12:05 UTC | 10 Feb 25 12:05 UTC |
	| delete  | -p download-only-152629        | download-only-152629 | jenkins | v1.35.0 | 10 Feb 25 12:05 UTC | 10 Feb 25 12:05 UTC |
	| start   | -o=json --download-only        | download-only-131804 | jenkins | v1.35.0 | 10 Feb 25 12:05 UTC |                     |
	|         | -p download-only-131804        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/10 12:05:20
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0210 12:05:20.593896  632555 out.go:345] Setting OutFile to fd 1 ...
	I0210 12:05:20.594054  632555 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 12:05:20.594067  632555 out.go:358] Setting ErrFile to fd 2...
	I0210 12:05:20.594074  632555 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 12:05:20.594280  632555 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20383-625153/.minikube/bin
	I0210 12:05:20.594849  632555 out.go:352] Setting JSON to true
	I0210 12:05:20.595745  632555 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":13671,"bootTime":1739175450,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0210 12:05:20.595849  632555 start.go:139] virtualization: kvm guest
	I0210 12:05:20.598104  632555 out.go:97] [download-only-131804] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0210 12:05:20.598280  632555 notify.go:220] Checking for updates...
	I0210 12:05:20.599681  632555 out.go:169] MINIKUBE_LOCATION=20383
	I0210 12:05:20.601132  632555 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 12:05:20.602759  632555 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20383-625153/kubeconfig
	I0210 12:05:20.604379  632555 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20383-625153/.minikube
	I0210 12:05:20.605815  632555 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-131804 host does not exist
	  To start a cluster, run: "minikube start -p download-only-131804"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.1/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.32.1/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.32.1/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-131804
--- PASS: TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.63s)

                                                
                                                
=== RUN   TestBinaryMirror
I0210 12:05:24.991384  632352 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-795954 --alsologtostderr --binary-mirror http://127.0.0.1:41035 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-795954" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-795954
--- PASS: TestBinaryMirror (0.63s)

                                                
                                    
TestOffline (134.47s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-106082 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-106082 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (2m13.385937909s)
helpers_test.go:175: Cleaning up "offline-crio-106082" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-106082
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-106082: (1.080934817s)
--- PASS: TestOffline (134.47s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-234038
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-234038: exit status 85 (55.569128ms)

                                                
                                                
-- stdout --
	* Profile "addons-234038" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-234038"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-234038
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-234038: exit status 85 (57.766328ms)

                                                
                                                
-- stdout --
	* Profile "addons-234038" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-234038"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (130.06s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-234038 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-234038 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m10.055752864s)
--- PASS: TestAddons/Setup (130.06s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-234038 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-234038 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (10.52s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-234038 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-234038 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [530c1b30-7cd8-4330-8f5a-bc8389728c98] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [530c1b30-7cd8-4330-8f5a-bc8389728c98] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.003249575s
addons_test.go:633: (dbg) Run:  kubectl --context addons-234038 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-234038 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-234038 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.52s)

                                                
                                    
TestAddons/parallel/Registry (21.05s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 7.622403ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-8ks2s" [0ef45a11-4943-40d8-afeb-bfaa998618ef] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003987102s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-wj8c2" [ebbd9a4b-1ff2-4667-840f-d09153bb86fb] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003159438s
addons_test.go:331: (dbg) Run:  kubectl --context addons-234038 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-234038 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-234038 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (9.262477027s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-234038 ip
2025/02/10 12:08:15 [DEBUG] GET http://192.168.39.247:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-234038 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (21.05s)

                                                
                                    
TestAddons/parallel/InspektorGadget (11.68s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-ms8k6" [a7c1fd97-b1b8-4d9e-a5c6-e735eb545472] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.005714699s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-234038 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-234038 addons disable inspektor-gadget --alsologtostderr -v=1: (5.677785236s)
--- PASS: TestAddons/parallel/InspektorGadget (11.68s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.99s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 9.130382ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
I0210 12:07:54.977605  632352 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0210 12:07:54.977637  632352 kapi.go:107] duration metric: took 12.0613ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
helpers_test.go:344: "metrics-server-7fbb699795-flrqb" [de55d6ce-d3c9-49b5-8f24-e8d71b30fbf5] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.006713997s
addons_test.go:402: (dbg) Run:  kubectl --context addons-234038 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-234038 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.99s)

                                                
                                    
TestAddons/parallel/CSI (49.7s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:488: csi-hostpath-driver pods stabilized in 12.07394ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-234038 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-234038 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-234038 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-234038 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-234038 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-234038 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-234038 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-234038 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-234038 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-234038 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-234038 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-234038 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [d7acc6f8-a31e-4dc8-9463-71b0af9c8ca0] Pending
helpers_test.go:344: "task-pv-pod" [d7acc6f8-a31e-4dc8-9463-71b0af9c8ca0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [d7acc6f8-a31e-4dc8-9463-71b0af9c8ca0] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.007306266s
addons_test.go:511: (dbg) Run:  kubectl --context addons-234038 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-234038 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-234038 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-234038 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-234038 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-234038 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-234038 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-234038 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-234038 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-234038 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-234038 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [ebb0833b-d862-4ad0-a942-2c6577d5ff4b] Pending
helpers_test.go:344: "task-pv-pod-restore" [ebb0833b-d862-4ad0-a942-2c6577d5ff4b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [ebb0833b-d862-4ad0-a942-2c6577d5ff4b] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 12.0036657s
addons_test.go:553: (dbg) Run:  kubectl --context addons-234038 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-234038 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-234038 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-234038 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-234038 addons disable volumesnapshots --alsologtostderr -v=1: (1.096680505s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-234038 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-234038 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.754514036s)
--- PASS: TestAddons/parallel/CSI (49.70s)

                                                
                                    
TestAddons/parallel/Headlamp (19.84s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-234038 --alsologtostderr -v=1
I0210 12:07:54.965603  632352 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5d4b5d7bd6-ksbxf" [f99c1891-eb6f-444b-aefe-6156ad423557] Pending
helpers_test.go:344: "headlamp-5d4b5d7bd6-ksbxf" [f99c1891-eb6f-444b-aefe-6156ad423557] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-ksbxf" [f99c1891-eb6f-444b-aefe-6156ad423557] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.003248796s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-234038 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-234038 addons disable headlamp --alsologtostderr -v=1: (5.944340092s)
--- PASS: TestAddons/parallel/Headlamp (19.84s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.65s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5d76cffbc-67zmh" [7c34baa1-7d1e-4bbb-a574-a8ca04e1f455] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004128854s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-234038 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.65s)

                                                
                                    
TestAddons/parallel/LocalPath (55.31s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-234038 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-234038 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-234038 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-234038 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-234038 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-234038 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-234038 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-234038 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-234038 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [02e07058-2149-4cdf-b8b9-73deefd4ce08] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [02e07058-2149-4cdf-b8b9-73deefd4ce08] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [02e07058-2149-4cdf-b8b9-73deefd4ce08] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003392458s
addons_test.go:906: (dbg) Run:  kubectl --context addons-234038 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-234038 ssh "cat /opt/local-path-provisioner/pvc-5a8361a7-be5f-41c0-89c9-8e967fbf6923_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-234038 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-234038 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-234038 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-234038 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.431680581s)
--- PASS: TestAddons/parallel/LocalPath (55.31s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.55s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-g7hmw" [74339db8-20ac-4fe3-b340-5d62da5d4a05] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.00378868s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-234038 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.55s)

                                                
                                    
TestAddons/parallel/Yakd (10.82s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-z6sm7" [7e624961-f712-4721-a9b9-f4d9cff636f0] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004492748s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-234038 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-234038 addons disable yakd --alsologtostderr -v=1: (5.8121079s)
--- PASS: TestAddons/parallel/Yakd (10.82s)

                                                
                                    
TestAddons/StoppedEnableDisable (91.07s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-234038
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-234038: (1m30.770201372s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-234038
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-234038
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-234038
--- PASS: TestAddons/StoppedEnableDisable (91.07s)

                                                
                                    
TestCertOptions (65.02s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-315999 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-315999 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m3.765464759s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-315999 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-315999 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-315999 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-315999" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-315999
--- PASS: TestCertOptions (65.02s)

                                                
                                    
TestCertExpiration (280.93s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-241180 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-241180 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (41.494500224s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-241180 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-241180 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (58.533573088s)
helpers_test.go:175: Cleaning up "cert-expiration-241180" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-241180
--- PASS: TestCertExpiration (280.93s)

                                                
                                    
TestForceSystemdFlag (73.17s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-485653 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E0210 13:05:46.486085  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/functional-653300/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-485653 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m11.931781759s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-485653 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-485653" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-485653
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-485653: (1.030006813s)
--- PASS: TestForceSystemdFlag (73.17s)

                                                
                                    
TestForceSystemdEnv (67.03s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-171047 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-171047 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m6.028496791s)
helpers_test.go:175: Cleaning up "force-systemd-env-171047" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-171047
--- PASS: TestForceSystemdEnv (67.03s)

                                                
                                    
TestKVMDriverInstallOrUpdate (3.91s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0210 13:09:17.805324  632352 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0210 13:09:17.805525  632352 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0210 13:09:17.848909  632352 install.go:62] docker-machine-driver-kvm2: exit status 1
W0210 13:09:17.849461  632352 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0210 13:09:17.849558  632352 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3276625469/001/docker-machine-driver-kvm2
I0210 13:09:18.065308  632352 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3276625469/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x54825c0 0x54825c0 0x54825c0 0x54825c0 0x54825c0 0x54825c0 0x54825c0] Decompressors:map[bz2:0xc000788488 gz:0xc000788550 tar:0xc0007884d0 tar.bz2:0xc000788500 tar.gz:0xc000788520 tar.xz:0xc000788530 tar.zst:0xc000788540 tbz2:0xc000788500 tgz:0xc000788520 txz:0xc000788530 tzst:0xc000788540 xz:0xc000788558 zip:0xc000788560 zst:0xc000788570] Getters:map[file:0xc00075e4e0 http:0xc00085a0f0 https:0xc00085a140] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0210 13:09:18.065374  632352 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3276625469/001/docker-machine-driver-kvm2
I0210 13:09:19.971171  632352 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0210 13:09:19.971282  632352 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0210 13:09:20.003117  632352 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0210 13:09:20.003157  632352 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0210 13:09:20.003235  632352 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0210 13:09:20.003270  632352 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3276625469/002/docker-machine-driver-kvm2
I0210 13:09:20.061806  632352 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3276625469/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x54825c0 0x54825c0 0x54825c0 0x54825c0 0x54825c0 0x54825c0 0x54825c0] Decompressors:map[bz2:0xc000788488 gz:0xc000788550 tar:0xc0007884d0 tar.bz2:0xc000788500 tar.gz:0xc000788520 tar.xz:0xc000788530 tar.zst:0xc000788540 tbz2:0xc000788500 tgz:0xc000788520 txz:0xc000788530 tzst:0xc000788540 xz:0xc000788558 zip:0xc000788560 zst:0xc000788570] Getters:map[file:0xc0007786a0 http:0xc0007ff090 https:0xc0007ff0e0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0210 13:09:20.061863  632352 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3276625469/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (3.91s)

                                                
                                    
TestErrorSpam/setup (40.64s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-140404 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-140404 --driver=kvm2  --container-runtime=crio
E0210 12:12:36.343491  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/client.crt: no such file or directory" logger="UnhandledError"
E0210 12:12:36.349883  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/client.crt: no such file or directory" logger="UnhandledError"
E0210 12:12:36.361200  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/client.crt: no such file or directory" logger="UnhandledError"
E0210 12:12:36.382550  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/client.crt: no such file or directory" logger="UnhandledError"
E0210 12:12:36.423974  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/client.crt: no such file or directory" logger="UnhandledError"
E0210 12:12:36.505493  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/client.crt: no such file or directory" logger="UnhandledError"
E0210 12:12:36.667030  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/client.crt: no such file or directory" logger="UnhandledError"
E0210 12:12:36.988767  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/client.crt: no such file or directory" logger="UnhandledError"
E0210 12:12:37.630891  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/client.crt: no such file or directory" logger="UnhandledError"
E0210 12:12:38.912528  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/client.crt: no such file or directory" logger="UnhandledError"
E0210 12:12:41.475496  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/client.crt: no such file or directory" logger="UnhandledError"
E0210 12:12:46.597721  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/client.crt: no such file or directory" logger="UnhandledError"
E0210 12:12:56.839461  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-140404 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-140404 --driver=kvm2  --container-runtime=crio: (40.638824649s)
--- PASS: TestErrorSpam/setup (40.64s)

                                                
                                    
TestErrorSpam/start (0.37s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-140404 --log_dir /tmp/nospam-140404 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-140404 --log_dir /tmp/nospam-140404 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-140404 --log_dir /tmp/nospam-140404 start --dry-run
--- PASS: TestErrorSpam/start (0.37s)

                                                
                                    
TestErrorSpam/status (0.74s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-140404 --log_dir /tmp/nospam-140404 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-140404 --log_dir /tmp/nospam-140404 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-140404 --log_dir /tmp/nospam-140404 status
--- PASS: TestErrorSpam/status (0.74s)

                                                
                                    
TestErrorSpam/pause (1.5s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-140404 --log_dir /tmp/nospam-140404 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-140404 --log_dir /tmp/nospam-140404 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-140404 --log_dir /tmp/nospam-140404 pause
--- PASS: TestErrorSpam/pause (1.50s)

                                                
                                    
TestErrorSpam/unpause (1.72s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-140404 --log_dir /tmp/nospam-140404 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-140404 --log_dir /tmp/nospam-140404 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-140404 --log_dir /tmp/nospam-140404 unpause
--- PASS: TestErrorSpam/unpause (1.72s)

                                                
                                    
TestErrorSpam/stop (4.14s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-140404 --log_dir /tmp/nospam-140404 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-140404 --log_dir /tmp/nospam-140404 stop: (1.611386943s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-140404 --log_dir /tmp/nospam-140404 stop
E0210 12:13:17.320941  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-140404 --log_dir /tmp/nospam-140404 stop: (1.153072887s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-140404 --log_dir /tmp/nospam-140404 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-140404 --log_dir /tmp/nospam-140404 stop: (1.380021853s)
--- PASS: TestErrorSpam/stop (4.14s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1872: local sync path: /home/jenkins/minikube-integration/20383-625153/.minikube/files/etc/test/nested/copy/632352/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (59.27s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-linux-amd64 start -p functional-653300 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0210 12:13:58.283505  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2251: (dbg) Done: out/minikube-linux-amd64 start -p functional-653300 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (59.271056658s)
--- PASS: TestFunctional/serial/StartWithProxy (59.27s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (38.13s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0210 12:14:18.740597  632352 config.go:182] Loaded profile config "functional-653300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
functional_test.go:676: (dbg) Run:  out/minikube-linux-amd64 start -p functional-653300 --alsologtostderr -v=8
functional_test.go:676: (dbg) Done: out/minikube-linux-amd64 start -p functional-653300 --alsologtostderr -v=8: (38.131947501s)
functional_test.go:680: soft start took 38.13277429s for "functional-653300" cluster.
I0210 12:14:56.872942  632352 config.go:182] Loaded profile config "functional-653300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/SoftStart (38.13s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context functional-653300 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 cache add registry.k8s.io/pause:3.1
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-653300 cache add registry.k8s.io/pause:3.1: (1.035457311s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 cache add registry.k8s.io/pause:3.3
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-653300 cache add registry.k8s.io/pause:3.3: (1.105888786s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 cache add registry.k8s.io/pause:latest
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-653300 cache add registry.k8s.io/pause:latest: (1.090452982s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.23s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.89s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1094: (dbg) Run:  docker build -t minikube-local-cache-test:functional-653300 /tmp/TestFunctionalserialCacheCmdcacheadd_local2976806931/001
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 cache add minikube-local-cache-test:functional-653300
functional_test.go:1106: (dbg) Done: out/minikube-linux-amd64 -p functional-653300 cache add minikube-local-cache-test:functional-653300: (1.579762144s)
functional_test.go:1111: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 cache delete minikube-local-cache-test:functional-653300
functional_test.go:1100: (dbg) Run:  docker rmi minikube-local-cache-test:functional-653300
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.89s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1119: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1127: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1141: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.71s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1164: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-653300 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (228.708905ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1175: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 cache reload
functional_test.go:1180: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.71s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 kubectl -- --context functional-653300 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out/kubectl --context functional-653300 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctional/serial/ExtraConfig (33.37s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-linux-amd64 start -p functional-653300 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0210 12:15:20.205286  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:774: (dbg) Done: out/minikube-linux-amd64 start -p functional-653300 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.37283546s)
functional_test.go:778: restart took 33.372989193s for "functional-653300" cluster.
I0210 12:15:37.870439  632352 config.go:182] Loaded profile config "functional-653300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/ExtraConfig (33.37s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:827: (dbg) Run:  kubectl --context functional-653300 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:842: etcd phase: Running
functional_test.go:852: etcd status: Ready
functional_test.go:842: kube-apiserver phase: Running
functional_test.go:852: kube-apiserver status: Ready
functional_test.go:842: kube-controller-manager phase: Running
functional_test.go:852: kube-controller-manager status: Ready
functional_test.go:842: kube-scheduler phase: Running
functional_test.go:852: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.38s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1253: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 logs
functional_test.go:1253: (dbg) Done: out/minikube-linux-amd64 -p functional-653300 logs: (1.379557854s)
--- PASS: TestFunctional/serial/LogsCmd (1.38s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.39s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1267: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 logs --file /tmp/TestFunctionalserialLogsFileCmd1677195395/001/logs.txt
functional_test.go:1267: (dbg) Done: out/minikube-linux-amd64 -p functional-653300 logs --file /tmp/TestFunctionalserialLogsFileCmd1677195395/001/logs.txt: (1.385788246s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.39s)

                                                
                                    
TestFunctional/serial/InvalidService (5.33s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2338: (dbg) Run:  kubectl --context functional-653300 apply -f testdata/invalidsvc.yaml
functional_test.go:2352: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-653300
functional_test.go:2352: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-653300: exit status 115 (299.245793ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.50.60:32196 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2344: (dbg) Run:  kubectl --context functional-653300 delete -f testdata/invalidsvc.yaml
functional_test.go:2344: (dbg) Done: kubectl --context functional-653300 delete -f testdata/invalidsvc.yaml: (1.83797666s)
--- PASS: TestFunctional/serial/InvalidService (5.33s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-653300 config get cpus: exit status 14 (61.470855ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 config set cpus 2
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 config get cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-653300 config get cpus: exit status 14 (59.44265ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.39s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (35.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:922: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-653300 --alsologtostderr -v=1]
functional_test.go:927: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-653300 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 640347: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (35.76s)

                                                
                                    
TestFunctional/parallel/DryRun (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-653300 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:991: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-653300 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (177.264041ms)

                                                
                                                
-- stdout --
	* [functional-653300] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20383
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20383-625153/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20383-625153/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0210 12:15:59.577491  640077 out.go:345] Setting OutFile to fd 1 ...
	I0210 12:15:59.577783  640077 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 12:15:59.577820  640077 out.go:358] Setting ErrFile to fd 2...
	I0210 12:15:59.577837  640077 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 12:15:59.578159  640077 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20383-625153/.minikube/bin
	I0210 12:15:59.579097  640077 out.go:352] Setting JSON to false
	I0210 12:15:59.580714  640077 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":14310,"bootTime":1739175450,"procs":240,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0210 12:15:59.580911  640077 start.go:139] virtualization: kvm guest
	I0210 12:15:59.583676  640077 out.go:177] * [functional-653300] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0210 12:15:59.585282  640077 notify.go:220] Checking for updates...
	I0210 12:15:59.585357  640077 out.go:177]   - MINIKUBE_LOCATION=20383
	I0210 12:15:59.586752  640077 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 12:15:59.588522  640077 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20383-625153/kubeconfig
	I0210 12:15:59.594401  640077 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20383-625153/.minikube
	I0210 12:15:59.596042  640077 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0210 12:15:59.597505  640077 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0210 12:15:59.599542  640077 config.go:182] Loaded profile config "functional-653300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 12:15:59.600230  640077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:15:59.600326  640077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:15:59.619745  640077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33005
	I0210 12:15:59.620112  640077 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:15:59.620709  640077 main.go:141] libmachine: Using API Version  1
	I0210 12:15:59.620745  640077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:15:59.621163  640077 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:15:59.621419  640077 main.go:141] libmachine: (functional-653300) Calling .DriverName
	I0210 12:15:59.621701  640077 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 12:15:59.622124  640077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:15:59.622169  640077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:15:59.640439  640077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43945
	I0210 12:15:59.640994  640077 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:15:59.641482  640077 main.go:141] libmachine: Using API Version  1
	I0210 12:15:59.641500  640077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:15:59.641874  640077 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:15:59.642162  640077 main.go:141] libmachine: (functional-653300) Calling .DriverName
	I0210 12:15:59.686098  640077 out.go:177] * Using the kvm2 driver based on existing profile
	I0210 12:15:59.687506  640077 start.go:297] selected driver: kvm2
	I0210 12:15:59.687528  640077 start.go:901] validating driver "kvm2" against &{Name:functional-653300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-653300 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.60 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Moun
t9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 12:15:59.687648  640077 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0210 12:15:59.689862  640077 out.go:201] 
	W0210 12:15:59.691308  640077 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0210 12:15:59.692609  640077 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:1008: (dbg) Run:  out/minikube-linux-amd64 start -p functional-653300 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.33s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 start -p functional-653300 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-653300 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (160.666474ms)

                                                
                                                
-- stdout --
	* [functional-653300] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20383
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20383-625153/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20383-625153/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0210 12:15:59.903929  640216 out.go:345] Setting OutFile to fd 1 ...
	I0210 12:15:59.904112  640216 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 12:15:59.904133  640216 out.go:358] Setting ErrFile to fd 2...
	I0210 12:15:59.904140  640216 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 12:15:59.904423  640216 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20383-625153/.minikube/bin
	I0210 12:15:59.904989  640216 out.go:352] Setting JSON to false
	I0210 12:15:59.906239  640216 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":14310,"bootTime":1739175450,"procs":252,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0210 12:15:59.906324  640216 start.go:139] virtualization: kvm guest
	I0210 12:15:59.910516  640216 out.go:177] * [functional-653300] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	I0210 12:15:59.912220  640216 notify.go:220] Checking for updates...
	I0210 12:15:59.912241  640216 out.go:177]   - MINIKUBE_LOCATION=20383
	I0210 12:15:59.913628  640216 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 12:15:59.914983  640216 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20383-625153/kubeconfig
	I0210 12:15:59.916195  640216 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20383-625153/.minikube
	I0210 12:15:59.917461  640216 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0210 12:15:59.918809  640216 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0210 12:15:59.920762  640216 config.go:182] Loaded profile config "functional-653300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 12:15:59.921420  640216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:15:59.921508  640216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:15:59.939342  640216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44295
	I0210 12:15:59.939865  640216 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:15:59.940472  640216 main.go:141] libmachine: Using API Version  1
	I0210 12:15:59.940496  640216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:15:59.940886  640216 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:15:59.941052  640216 main.go:141] libmachine: (functional-653300) Calling .DriverName
	I0210 12:15:59.941406  640216 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 12:15:59.941867  640216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:15:59.941925  640216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:15:59.958937  640216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40361
	I0210 12:15:59.959494  640216 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:15:59.960160  640216 main.go:141] libmachine: Using API Version  1
	I0210 12:15:59.960188  640216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:15:59.960539  640216 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:15:59.960746  640216 main.go:141] libmachine: (functional-653300) Calling .DriverName
	I0210 12:15:59.997320  640216 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0210 12:15:59.998570  640216 start.go:297] selected driver: kvm2
	I0210 12:15:59.998587  640216 start.go:901] validating driver "kvm2" against &{Name:functional-653300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-653300 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.60 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Moun
t9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 12:15:59.998735  640216 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0210 12:16:00.001275  640216 out.go:201] 
	W0210 12:16:00.002630  640216 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0210 12:16:00.003930  640216 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:871: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 status
functional_test.go:877: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:889: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.09s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (10.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1646: (dbg) Run:  kubectl --context functional-653300 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1652: (dbg) Run:  kubectl --context functional-653300 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-vrgz4" [0182e14d-1239-4970-83e8-3b93f69923b9] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-vrgz4" [0182e14d-1239-4970-83e8-3b93f69923b9] Running
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.005900218s
functional_test.go:1666: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 service hello-node-connect --url
functional_test.go:1672: found endpoint for hello-node-connect: http://192.168.50.60:31490
functional_test.go:1692: http://192.168.50.60:31490: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-58f9cf68d8-vrgz4

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.50.60:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.50.60:31490
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.63s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 addons list
functional_test.go:1719: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (48.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [fbd45263-a17d-4c57-ae73-ea3b7864f2a1] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003873757s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-653300 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-653300 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-653300 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-653300 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2e619466-bbaf-410a-9b1f-228b4cb63318] Pending
helpers_test.go:344: "sp-pod" [2e619466-bbaf-410a-9b1f-228b4cb63318] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [2e619466-bbaf-410a-9b1f-228b4cb63318] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.002712316s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-653300 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-653300 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-653300 delete -f testdata/storage-provisioner/pod.yaml: (8.17439603s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-653300 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [5b685934-ac73-409a-8571-b658c9ef6657] Pending
helpers_test.go:344: "sp-pod" [5b685934-ac73-409a-8571-b658c9ef6657] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [5b685934-ac73-409a-8571-b658c9ef6657] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 18.003708509s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-653300 exec sp-pod -- ls /tmp/mount
2025/02/10 12:16:35 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (48.06s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 ssh "echo hello"
functional_test.go:1759: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.46s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 ssh -n functional-653300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 cp functional-653300:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1467771473/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 ssh -n functional-653300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 ssh -n functional-653300 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.34s)

                                                
                                    
TestFunctional/parallel/MySQL (21.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1810: (dbg) Run:  kubectl --context functional-653300 replace --force -f testdata/mysql.yaml
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-schhx" [9d6b2c8d-39c9-493f-86d8-13d2fbac5980] Pending
helpers_test.go:344: "mysql-58ccfd96bb-schhx" [9d6b2c8d-39c9-493f-86d8-13d2fbac5980] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-58ccfd96bb-schhx" [9d6b2c8d-39c9-493f-86d8-13d2fbac5980] Running
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.071626367s
functional_test.go:1824: (dbg) Run:  kubectl --context functional-653300 exec mysql-58ccfd96bb-schhx -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (21.55s)

                                                
                                    
TestFunctional/parallel/FileSync (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1946: Checking for existence of /etc/test/nested/copy/632352/hosts within VM
functional_test.go:1948: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 ssh "sudo cat /etc/test/nested/copy/632352/hosts"
functional_test.go:1953: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.30s)

                                                
                                    
TestFunctional/parallel/CertSync (1.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1989: Checking for existence of /etc/ssl/certs/632352.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 ssh "sudo cat /etc/ssl/certs/632352.pem"
functional_test.go:1989: Checking for existence of /usr/share/ca-certificates/632352.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 ssh "sudo cat /usr/share/ca-certificates/632352.pem"
functional_test.go:1989: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/6323522.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 ssh "sudo cat /etc/ssl/certs/6323522.pem"
functional_test.go:2016: Checking for existence of /usr/share/ca-certificates/6323522.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 ssh "sudo cat /usr/share/ca-certificates/6323522.pem"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.58s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:236: (dbg) Run:  kubectl --context functional-653300 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 ssh "sudo systemctl is-active docker"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-653300 ssh "sudo systemctl is-active docker": exit status 1 (244.627174ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 ssh "sudo systemctl is-active containerd"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-653300 ssh "sudo systemctl is-active containerd": exit status 1 (237.641093ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.48s)

                                                
                                    
TestFunctional/parallel/License (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2305: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.19s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2273: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.75s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 image ls --format table --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-653300 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/my-image                      | functional-653300  | 57cbf670f5767 | 1.47MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/etcd                    | 3.5.16-0           | a9e7e6b294baf | 151MB  |
| registry.k8s.io/kube-apiserver          | v1.32.1            | 95c0bda56fc4d | 98.1MB |
| registry.k8s.io/kube-scheduler          | v1.32.1            | 2b0d6572d062c | 70.6MB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/kube-controller-manager | v1.32.1            | 019ee182b58e2 | 90.8MB |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| localhost/minikube-local-cache-test     | functional-653300  | 26dc851a7ed4e | 3.33kB |
| docker.io/library/nginx                 | latest             | 97662d24417b3 | 196MB  |
| localhost/kicbase/echo-server           | functional-653300  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/kube-proxy              | v1.32.1            | e29f9c7391fd9 | 95.3MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kindest/kindnetd              | v20241108-5c6d2daf | 50415e5d05f05 | 95MB   |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-653300 image ls --format table --alsologtostderr:
I0210 12:16:19.415063  641065 out.go:345] Setting OutFile to fd 1 ...
I0210 12:16:19.415226  641065 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 12:16:19.415238  641065 out.go:358] Setting ErrFile to fd 2...
I0210 12:16:19.415244  641065 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 12:16:19.415450  641065 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20383-625153/.minikube/bin
I0210 12:16:19.416043  641065 config.go:182] Loaded profile config "functional-653300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0210 12:16:19.416179  641065 config.go:182] Loaded profile config "functional-653300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0210 12:16:19.416530  641065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0210 12:16:19.416590  641065 main.go:141] libmachine: Launching plugin server for driver kvm2
I0210 12:16:19.431937  641065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39479
I0210 12:16:19.432332  641065 main.go:141] libmachine: () Calling .GetVersion
I0210 12:16:19.432898  641065 main.go:141] libmachine: Using API Version  1
I0210 12:16:19.432922  641065 main.go:141] libmachine: () Calling .SetConfigRaw
I0210 12:16:19.433258  641065 main.go:141] libmachine: () Calling .GetMachineName
I0210 12:16:19.433448  641065 main.go:141] libmachine: (functional-653300) Calling .GetState
I0210 12:16:19.435194  641065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0210 12:16:19.435242  641065 main.go:141] libmachine: Launching plugin server for driver kvm2
I0210 12:16:19.449789  641065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46335
I0210 12:16:19.450210  641065 main.go:141] libmachine: () Calling .GetVersion
I0210 12:16:19.450675  641065 main.go:141] libmachine: Using API Version  1
I0210 12:16:19.450696  641065 main.go:141] libmachine: () Calling .SetConfigRaw
I0210 12:16:19.451010  641065 main.go:141] libmachine: () Calling .GetMachineName
I0210 12:16:19.451217  641065 main.go:141] libmachine: (functional-653300) Calling .DriverName
I0210 12:16:19.451418  641065 ssh_runner.go:195] Run: systemctl --version
I0210 12:16:19.451456  641065 main.go:141] libmachine: (functional-653300) Calling .GetSSHHostname
I0210 12:16:19.453873  641065 main.go:141] libmachine: (functional-653300) DBG | domain functional-653300 has defined MAC address 52:54:00:09:ba:a1 in network mk-functional-653300
I0210 12:16:19.454287  641065 main.go:141] libmachine: (functional-653300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:ba:a1", ip: ""} in network mk-functional-653300: {Iface:virbr1 ExpiryTime:2025-02-10 13:13:34 +0000 UTC Type:0 Mac:52:54:00:09:ba:a1 Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:functional-653300 Clientid:01:52:54:00:09:ba:a1}
I0210 12:16:19.454323  641065 main.go:141] libmachine: (functional-653300) DBG | domain functional-653300 has defined IP address 192.168.50.60 and MAC address 52:54:00:09:ba:a1 in network mk-functional-653300
I0210 12:16:19.454432  641065 main.go:141] libmachine: (functional-653300) Calling .GetSSHPort
I0210 12:16:19.454603  641065 main.go:141] libmachine: (functional-653300) Calling .GetSSHKeyPath
I0210 12:16:19.454723  641065 main.go:141] libmachine: (functional-653300) Calling .GetSSHUsername
I0210 12:16:19.454850  641065 sshutil.go:53] new ssh client: &{IP:192.168.50.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/functional-653300/id_rsa Username:docker}
I0210 12:16:19.539668  641065 ssh_runner.go:195] Run: sudo crictl images --output json
I0210 12:16:19.577440  641065 main.go:141] libmachine: Making call to close driver server
I0210 12:16:19.577460  641065 main.go:141] libmachine: (functional-653300) Calling .Close
I0210 12:16:19.577731  641065 main.go:141] libmachine: Successfully made call to close driver server
I0210 12:16:19.577750  641065 main.go:141] libmachine: Making call to close connection to plugin binary
I0210 12:16:19.577765  641065 main.go:141] libmachine: Making call to close driver server
I0210 12:16:19.577767  641065 main.go:141] libmachine: (functional-653300) DBG | Closing plugin on server side
I0210 12:16:19.577774  641065 main.go:141] libmachine: (functional-653300) Calling .Close
I0210 12:16:19.577990  641065 main.go:141] libmachine: Successfully made call to close driver server
I0210 12:16:19.578004  641065 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 image ls --format json --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-653300 image ls --format json --alsologtostderr:
[{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a","repoDigests":["registry.k8s.io/kube-apiserver@sha256:769a11bfd73df7db947d51b0f7a3a60383a0338904d6944cced924d33f0d7286","registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.1"],"size":"98051552"},{"id":"e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a","repoDigests":["registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5","registry.k8s.io/kube-proxy@sha256:a739122f1b5b17e2db96006120ad5fb9a3c654da0732
2bcaa62263c403ef69a8"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.1"],"size":"95271321"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"26dc851a7ed4e4d2942a0494b2de84ff996d5842262a932318041f062bc20eb0","repoDigests":["localhost/minikube-local-cache-test@sha256:9e9b4949dec69c163de0d354bae606cab4b3cf95a7b09fb4d3944a8d8c11c706"],"repoTags":["localhost/minikube-local-cache-test:functional-653300"],"size":"3330"},{"id":"c5b2199a75d7aae79e269196c5b16c3ea84be2e61df4e4c806051459a9e2e9dd","repoDigests":["docker.io/library/401ef4b273b224a6f395faf7c99ae03256148195b902fb14ba572c304cc23111-tmp@sha256:725a96d25e3a95cdb059b0a5c09c4f162cab0223f286932c8300c027a2d3a00d"],"repoTags":[],"size":"1466018"},{"id":
"97662d24417b316f60607afbca9f226a2ba58f09d642f27b8e197a89859ddc8e","repoDigests":["docker.io/library/nginx@sha256:088eea90c3d0a540ee5686e7d7471acbd4063b6e97eaf49b5e651665eb7f4dc7","docker.io/library/nginx@sha256:91734281c0ebfc6f1aea979cffeed5079cfe786228a71cc6f1f46a228cde6e34"],"repoTags":["docker.io/library/nginx:latest"],"size":"196149140"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f3
8f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-653300"],"size":"4943877"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e","repoDigests":["docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3","docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"],"repoTags":["docker.io/kindest/kindnetd:v20241108-5c6d2daf"],"size":"94963761"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c
6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","repoDigests":["registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba5
8f6eec19554caf36edb43c357ff412d07e990","registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"151021823"},{"id":"019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954","registry.k8s.io/kube-controller-manager@sha256:c9067d10dcf5ca45b2be9260f3b15e9c94e05fd8039c53341a23d3b4cf0cc619"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.1"],"size":"90793286"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff2
24d305d78a05bac38f72e","registry.k8s.io/kube-scheduler@sha256:e2b8e00ff17f8b0427e34d28897d7bf6f7a63ec48913ea01d4082ab91ca28476"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.1"],"size":"70649158"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"57cbf670f5767c6649a9c7789bc0ce26b998fbf2ccbcab08680eb4a702aa6a00","repoDigests":["localhost/my-image@sha256:3c2720d0f358874a30cee40c2855982220ee8e433a12f17388767dce61753b0a"],"repoTags":["localhost/my-image:functional-653300
"],"size":"1468600"}]
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-653300 image ls --format json --alsologtostderr:
I0210 12:16:19.202334  641041 out.go:345] Setting OutFile to fd 1 ...
I0210 12:16:19.202448  641041 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 12:16:19.202457  641041 out.go:358] Setting ErrFile to fd 2...
I0210 12:16:19.202461  641041 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 12:16:19.202627  641041 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20383-625153/.minikube/bin
I0210 12:16:19.203266  641041 config.go:182] Loaded profile config "functional-653300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0210 12:16:19.203364  641041 config.go:182] Loaded profile config "functional-653300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0210 12:16:19.203719  641041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0210 12:16:19.203783  641041 main.go:141] libmachine: Launching plugin server for driver kvm2
I0210 12:16:19.219374  641041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38549
I0210 12:16:19.219968  641041 main.go:141] libmachine: () Calling .GetVersion
I0210 12:16:19.220615  641041 main.go:141] libmachine: Using API Version  1
I0210 12:16:19.220653  641041 main.go:141] libmachine: () Calling .SetConfigRaw
I0210 12:16:19.221032  641041 main.go:141] libmachine: () Calling .GetMachineName
I0210 12:16:19.221249  641041 main.go:141] libmachine: (functional-653300) Calling .GetState
I0210 12:16:19.223275  641041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0210 12:16:19.223329  641041 main.go:141] libmachine: Launching plugin server for driver kvm2
I0210 12:16:19.239599  641041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34579
I0210 12:16:19.240070  641041 main.go:141] libmachine: () Calling .GetVersion
I0210 12:16:19.240685  641041 main.go:141] libmachine: Using API Version  1
I0210 12:16:19.240719  641041 main.go:141] libmachine: () Calling .SetConfigRaw
I0210 12:16:19.241022  641041 main.go:141] libmachine: () Calling .GetMachineName
I0210 12:16:19.241301  641041 main.go:141] libmachine: (functional-653300) Calling .DriverName
I0210 12:16:19.241508  641041 ssh_runner.go:195] Run: systemctl --version
I0210 12:16:19.241535  641041 main.go:141] libmachine: (functional-653300) Calling .GetSSHHostname
I0210 12:16:19.244537  641041 main.go:141] libmachine: (functional-653300) DBG | domain functional-653300 has defined MAC address 52:54:00:09:ba:a1 in network mk-functional-653300
I0210 12:16:19.244980  641041 main.go:141] libmachine: (functional-653300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:ba:a1", ip: ""} in network mk-functional-653300: {Iface:virbr1 ExpiryTime:2025-02-10 13:13:34 +0000 UTC Type:0 Mac:52:54:00:09:ba:a1 Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:functional-653300 Clientid:01:52:54:00:09:ba:a1}
I0210 12:16:19.245015  641041 main.go:141] libmachine: (functional-653300) DBG | domain functional-653300 has defined IP address 192.168.50.60 and MAC address 52:54:00:09:ba:a1 in network mk-functional-653300
I0210 12:16:19.245196  641041 main.go:141] libmachine: (functional-653300) Calling .GetSSHPort
I0210 12:16:19.245397  641041 main.go:141] libmachine: (functional-653300) Calling .GetSSHKeyPath
I0210 12:16:19.245583  641041 main.go:141] libmachine: (functional-653300) Calling .GetSSHUsername
I0210 12:16:19.245728  641041 sshutil.go:53] new ssh client: &{IP:192.168.50.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/functional-653300/id_rsa Username:docker}
I0210 12:16:19.327883  641041 ssh_runner.go:195] Run: sudo crictl images --output json
I0210 12:16:19.361175  641041 main.go:141] libmachine: Making call to close driver server
I0210 12:16:19.361210  641041 main.go:141] libmachine: (functional-653300) Calling .Close
I0210 12:16:19.361499  641041 main.go:141] libmachine: (functional-653300) DBG | Closing plugin on server side
I0210 12:16:19.361541  641041 main.go:141] libmachine: Successfully made call to close driver server
I0210 12:16:19.361557  641041 main.go:141] libmachine: Making call to close connection to plugin binary
I0210 12:16:19.361571  641041 main.go:141] libmachine: Making call to close driver server
I0210 12:16:19.361583  641041 main.go:141] libmachine: (functional-653300) Calling .Close
I0210 12:16:19.361813  641041 main.go:141] libmachine: (functional-653300) DBG | Closing plugin on server side
I0210 12:16:19.361847  641041 main.go:141] libmachine: Successfully made call to close driver server
I0210 12:16:19.361859  641041 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
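
As a rough illustration of consuming the output above (the struct is hypothetical, but its field names are taken verbatim from the JSON printed by "image ls --format json" in this run), the list can be decoded like so:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image mirrors the objects emitted by "image ls --format json" in the log above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-653300",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Println(img.RepoTags, img.Size)
	}
}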

TestFunctional/parallel/ImageCommands/ImageListYaml (0.74s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 image ls --format yaml --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-653300 image ls --format yaml --alsologtostderr:
- id: 50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e
repoDigests:
- docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3
- docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108
repoTags:
- docker.io/kindest/kindnetd:v20241108-5c6d2daf
size: "94963761"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-653300
size: "4943877"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954
- registry.k8s.io/kube-controller-manager@sha256:c9067d10dcf5ca45b2be9260f3b15e9c94e05fd8039c53341a23d3b4cf0cc619
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.1
size: "90793286"
- id: 2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e
- registry.k8s.io/kube-scheduler@sha256:e2b8e00ff17f8b0427e34d28897d7bf6f7a63ec48913ea01d4082ab91ca28476
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.1
size: "70649158"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc
repoDigests:
- registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "151021823"
- id: 95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:769a11bfd73df7db947d51b0f7a3a60383a0338904d6944cced924d33f0d7286
- registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.1
size: "98051552"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 97662d24417b316f60607afbca9f226a2ba58f09d642f27b8e197a89859ddc8e
repoDigests:
- docker.io/library/nginx@sha256:088eea90c3d0a540ee5686e7d7471acbd4063b6e97eaf49b5e651665eb7f4dc7
- docker.io/library/nginx@sha256:91734281c0ebfc6f1aea979cffeed5079cfe786228a71cc6f1f46a228cde6e34
repoTags:
- docker.io/library/nginx:latest
size: "196149140"
- id: 26dc851a7ed4e4d2942a0494b2de84ff996d5842262a932318041f062bc20eb0
repoDigests:
- localhost/minikube-local-cache-test@sha256:9e9b4949dec69c163de0d354bae606cab4b3cf95a7b09fb4d3944a8d8c11c706
repoTags:
- localhost/minikube-local-cache-test:functional-653300
size: "3330"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5
- registry.k8s.io/kube-proxy@sha256:a739122f1b5b17e2db96006120ad5fb9a3c654da07322bcaa62263c403ef69a8
repoTags:
- registry.k8s.io/kube-proxy:v1.32.1
size: "95271321"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-653300 image ls --format yaml --alsologtostderr:
I0210 12:16:14.330944  640914 out.go:345] Setting OutFile to fd 1 ...
I0210 12:16:14.331047  640914 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 12:16:14.331055  640914 out.go:358] Setting ErrFile to fd 2...
I0210 12:16:14.331059  640914 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 12:16:14.331264  640914 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20383-625153/.minikube/bin
I0210 12:16:14.331957  640914 config.go:182] Loaded profile config "functional-653300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0210 12:16:14.332127  640914 config.go:182] Loaded profile config "functional-653300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0210 12:16:14.332574  640914 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0210 12:16:14.332629  640914 main.go:141] libmachine: Launching plugin server for driver kvm2
I0210 12:16:14.349885  640914 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35103
I0210 12:16:14.350501  640914 main.go:141] libmachine: () Calling .GetVersion
I0210 12:16:14.351202  640914 main.go:141] libmachine: Using API Version  1
I0210 12:16:14.351232  640914 main.go:141] libmachine: () Calling .SetConfigRaw
I0210 12:16:14.351635  640914 main.go:141] libmachine: () Calling .GetMachineName
I0210 12:16:14.351873  640914 main.go:141] libmachine: (functional-653300) Calling .GetState
I0210 12:16:14.353979  640914 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0210 12:16:14.354030  640914 main.go:141] libmachine: Launching plugin server for driver kvm2
I0210 12:16:14.369486  640914 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39979
I0210 12:16:14.369984  640914 main.go:141] libmachine: () Calling .GetVersion
I0210 12:16:14.370503  640914 main.go:141] libmachine: Using API Version  1
I0210 12:16:14.370528  640914 main.go:141] libmachine: () Calling .SetConfigRaw
I0210 12:16:14.370887  640914 main.go:141] libmachine: () Calling .GetMachineName
I0210 12:16:14.371094  640914 main.go:141] libmachine: (functional-653300) Calling .DriverName
I0210 12:16:14.371308  640914 ssh_runner.go:195] Run: systemctl --version
I0210 12:16:14.371333  640914 main.go:141] libmachine: (functional-653300) Calling .GetSSHHostname
I0210 12:16:14.374246  640914 main.go:141] libmachine: (functional-653300) DBG | domain functional-653300 has defined MAC address 52:54:00:09:ba:a1 in network mk-functional-653300
I0210 12:16:14.374754  640914 main.go:141] libmachine: (functional-653300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:ba:a1", ip: ""} in network mk-functional-653300: {Iface:virbr1 ExpiryTime:2025-02-10 13:13:34 +0000 UTC Type:0 Mac:52:54:00:09:ba:a1 Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:functional-653300 Clientid:01:52:54:00:09:ba:a1}
I0210 12:16:14.374784  640914 main.go:141] libmachine: (functional-653300) DBG | domain functional-653300 has defined IP address 192.168.50.60 and MAC address 52:54:00:09:ba:a1 in network mk-functional-653300
I0210 12:16:14.374852  640914 main.go:141] libmachine: (functional-653300) Calling .GetSSHPort
I0210 12:16:14.375070  640914 main.go:141] libmachine: (functional-653300) Calling .GetSSHKeyPath
I0210 12:16:14.375219  640914 main.go:141] libmachine: (functional-653300) Calling .GetSSHUsername
I0210 12:16:14.375398  640914 sshutil.go:53] new ssh client: &{IP:192.168.50.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/functional-653300/id_rsa Username:docker}
I0210 12:16:14.484429  640914 ssh_runner.go:195] Run: sudo crictl images --output json
I0210 12:16:15.002449  640914 main.go:141] libmachine: Making call to close driver server
I0210 12:16:15.002476  640914 main.go:141] libmachine: (functional-653300) Calling .Close
I0210 12:16:15.002761  640914 main.go:141] libmachine: Successfully made call to close driver server
I0210 12:16:15.002776  640914 main.go:141] libmachine: Making call to close connection to plugin binary
I0210 12:16:15.002792  640914 main.go:141] libmachine: Making call to close driver server
I0210 12:16:15.002800  640914 main.go:141] libmachine: (functional-653300) Calling .Close
I0210 12:16:15.003068  640914 main.go:141] libmachine: Successfully made call to close driver server
I0210 12:16:15.003089  640914 main.go:141] libmachine: Making call to close connection to plugin binary
I0210 12:16:15.003100  640914 main.go:141] libmachine: (functional-653300) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.74s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 ssh pgrep buildkitd
functional_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-653300 ssh pgrep buildkitd: exit status 1 (211.657196ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:332: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 image build -t localhost/my-image:functional-653300 testdata/build --alsologtostderr
functional_test.go:332: (dbg) Done: out/minikube-linux-amd64 -p functional-653300 image build -t localhost/my-image:functional-653300 testdata/build --alsologtostderr: (3.666390627s)
functional_test.go:337: (dbg) Stdout: out/minikube-linux-amd64 -p functional-653300 image build -t localhost/my-image:functional-653300 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> c5b2199a75d
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-653300
--> 57cbf670f57
Successfully tagged localhost/my-image:functional-653300
57cbf670f5767c6649a9c7789bc0ce26b998fbf2ccbcab08680eb4a702aa6a00
functional_test.go:340: (dbg) Stderr: out/minikube-linux-amd64 -p functional-653300 image build -t localhost/my-image:functional-653300 testdata/build --alsologtostderr:
I0210 12:16:15.283996  640967 out.go:345] Setting OutFile to fd 1 ...
I0210 12:16:15.284369  640967 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 12:16:15.284384  640967 out.go:358] Setting ErrFile to fd 2...
I0210 12:16:15.284391  640967 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 12:16:15.284677  640967 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20383-625153/.minikube/bin
I0210 12:16:15.285586  640967 config.go:182] Loaded profile config "functional-653300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0210 12:16:15.286185  640967 config.go:182] Loaded profile config "functional-653300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0210 12:16:15.286633  640967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0210 12:16:15.286688  640967 main.go:141] libmachine: Launching plugin server for driver kvm2
I0210 12:16:15.302927  640967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41975
I0210 12:16:15.303448  640967 main.go:141] libmachine: () Calling .GetVersion
I0210 12:16:15.304058  640967 main.go:141] libmachine: Using API Version  1
I0210 12:16:15.304082  640967 main.go:141] libmachine: () Calling .SetConfigRaw
I0210 12:16:15.304471  640967 main.go:141] libmachine: () Calling .GetMachineName
I0210 12:16:15.304713  640967 main.go:141] libmachine: (functional-653300) Calling .GetState
I0210 12:16:15.306738  640967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0210 12:16:15.306790  640967 main.go:141] libmachine: Launching plugin server for driver kvm2
I0210 12:16:15.322497  640967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34781
I0210 12:16:15.323050  640967 main.go:141] libmachine: () Calling .GetVersion
I0210 12:16:15.323655  640967 main.go:141] libmachine: Using API Version  1
I0210 12:16:15.323677  640967 main.go:141] libmachine: () Calling .SetConfigRaw
I0210 12:16:15.324040  640967 main.go:141] libmachine: () Calling .GetMachineName
I0210 12:16:15.324231  640967 main.go:141] libmachine: (functional-653300) Calling .DriverName
I0210 12:16:15.324471  640967 ssh_runner.go:195] Run: systemctl --version
I0210 12:16:15.324502  640967 main.go:141] libmachine: (functional-653300) Calling .GetSSHHostname
I0210 12:16:15.327539  640967 main.go:141] libmachine: (functional-653300) DBG | domain functional-653300 has defined MAC address 52:54:00:09:ba:a1 in network mk-functional-653300
I0210 12:16:15.327962  640967 main.go:141] libmachine: (functional-653300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:ba:a1", ip: ""} in network mk-functional-653300: {Iface:virbr1 ExpiryTime:2025-02-10 13:13:34 +0000 UTC Type:0 Mac:52:54:00:09:ba:a1 Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:functional-653300 Clientid:01:52:54:00:09:ba:a1}
I0210 12:16:15.327987  640967 main.go:141] libmachine: (functional-653300) DBG | domain functional-653300 has defined IP address 192.168.50.60 and MAC address 52:54:00:09:ba:a1 in network mk-functional-653300
I0210 12:16:15.328155  640967 main.go:141] libmachine: (functional-653300) Calling .GetSSHPort
I0210 12:16:15.328305  640967 main.go:141] libmachine: (functional-653300) Calling .GetSSHKeyPath
I0210 12:16:15.328469  640967 main.go:141] libmachine: (functional-653300) Calling .GetSSHUsername
I0210 12:16:15.328616  640967 sshutil.go:53] new ssh client: &{IP:192.168.50.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/functional-653300/id_rsa Username:docker}
I0210 12:16:15.415429  640967 build_images.go:161] Building image from path: /tmp/build.880727725.tar
I0210 12:16:15.415533  640967 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0210 12:16:15.425988  640967 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.880727725.tar
I0210 12:16:15.431261  640967 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.880727725.tar: stat -c "%s %y" /var/lib/minikube/build/build.880727725.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.880727725.tar': No such file or directory
I0210 12:16:15.431298  640967 ssh_runner.go:362] scp /tmp/build.880727725.tar --> /var/lib/minikube/build/build.880727725.tar (3072 bytes)
I0210 12:16:15.458290  640967 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.880727725
I0210 12:16:15.467368  640967 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.880727725 -xf /var/lib/minikube/build/build.880727725.tar
I0210 12:16:15.477187  640967 crio.go:315] Building image: /var/lib/minikube/build/build.880727725
I0210 12:16:15.477279  640967 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-653300 /var/lib/minikube/build/build.880727725 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0210 12:16:18.855872  640967 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-653300 /var/lib/minikube/build/build.880727725 --cgroup-manager=cgroupfs: (3.378543541s)
I0210 12:16:18.855978  640967 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.880727725
I0210 12:16:18.872024  640967 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.880727725.tar
I0210 12:16:18.882720  640967 build_images.go:217] Built localhost/my-image:functional-653300 from /tmp/build.880727725.tar
I0210 12:16:18.882759  640967 build_images.go:133] succeeded building to: functional-653300
I0210 12:16:18.882765  640967 build_images.go:134] failed building to: 
I0210 12:16:18.882843  640967 main.go:141] libmachine: Making call to close driver server
I0210 12:16:18.882865  640967 main.go:141] libmachine: (functional-653300) Calling .Close
I0210 12:16:18.883133  640967 main.go:141] libmachine: Successfully made call to close driver server
I0210 12:16:18.883152  640967 main.go:141] libmachine: Making call to close connection to plugin binary
I0210 12:16:18.883161  640967 main.go:141] libmachine: Making call to close driver server
I0210 12:16:18.883163  640967 main.go:141] libmachine: (functional-653300) DBG | Closing plugin on server side
I0210 12:16:18.883166  640967 main.go:141] libmachine: (functional-653300) Calling .Close
I0210 12:16:18.883452  640967 main.go:141] libmachine: Successfully made call to close driver server
I0210 12:16:18.883467  640967 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.14s)
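
A hypothetical reproduction of the two high-level steps this test performs after confirming buildkitd is not running: build the image from the same testdata context, then list images to confirm the tag landed (the tag, context path, and binary path are copied from the run above; this is a sketch, not the test's own code).

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	bin, profile := "out/minikube-linux-amd64", "functional-653300"

	// Build the image from the same context directory the test uses.
	build := exec.Command(bin, "-p", profile, "image", "build",
		"-t", "localhost/my-image:"+profile, "testdata/build")
	if out, err := build.CombinedOutput(); err != nil {
		panic(fmt.Sprintf("build failed: %v\n%s", err, out))
	}

	// Verify the freshly built tag shows up in "image ls".
	out, err := exec.Command(bin, "-p", profile, "image", "ls").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("image present:", strings.Contains(string(out), "localhost/my-image"))
}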

TestFunctional/parallel/ImageCommands/Setup (1.52s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:359: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:359: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.501091489s)
functional_test.go:364: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-653300
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.52s)

TestFunctional/parallel/ServiceCmd/DeployApp (10.17s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-653300 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1462: (dbg) Run:  kubectl --context functional-653300 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-5cwgf" [0e7be314-f3d6-45dc-87b9-d06185e0a197] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-5cwgf" [0e7be314-f3d6-45dc-87b9-d06185e0a197] Running
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.005144671s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.17s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.62s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:372: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 image load --daemon kicbase/echo-server:functional-653300 --alsologtostderr
functional_test.go:372: (dbg) Done: out/minikube-linux-amd64 -p functional-653300 image load --daemon kicbase/echo-server:functional-653300 --alsologtostderr: (3.356510475s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.62s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.87s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 image load --daemon kicbase/echo-server:functional-653300 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.87s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.58s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:252: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:257: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-653300
functional_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 image load --daemon kicbase/echo-server:functional-653300 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.58s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:397: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 image save kicbase/echo-server:functional-653300 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 image rm kicbase/echo-server:functional-653300 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:426: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.04s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.85s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:436: (dbg) Run:  docker rmi kicbase/echo-server:functional-653300
functional_test.go:441: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 image save --daemon kicbase/echo-server:functional-653300 --alsologtostderr
functional_test.go:449: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-653300
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.85s)

TestFunctional/parallel/ServiceCmd/List (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1476: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.37s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1506: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 service list -o json
functional_test.go:1511: Took "354.260843ms" to run "out/minikube-linux-amd64 -p functional-653300 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.35s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1526: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 service --namespace=default --https --url hello-node
functional_test.go:1539: found endpoint: https://192.168.50.60:30610
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.32s)

TestFunctional/parallel/ServiceCmd/Format (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1557: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.34s)

TestFunctional/parallel/ServiceCmd/URL (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1576: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 service hello-node --url
functional_test.go:1582: found endpoint for hello-node: http://192.168.50.60:30610
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.36s)
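
For illustration only (the HTTP probe at the end is an extra step this test does not perform), the NodePort URL resolved above could be fetched like this:

package main

import (
	"fmt"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Same command as the test; it printed http://192.168.50.60:30610 in this run.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-653300",
		"service", "hello-node", "--url").Output()
	if err != nil {
		panic(err)
	}
	url := strings.TrimSpace(string(out))

	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("hello-node responded with", resp.Status)
}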

TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.49s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1287: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1292: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.49s)

TestFunctional/parallel/MountCmd/any-port (8.96s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-653300 /tmp/TestFunctionalparallelMountCmdany-port2968928997/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1739189758675480598" to /tmp/TestFunctionalparallelMountCmdany-port2968928997/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1739189758675480598" to /tmp/TestFunctionalparallelMountCmdany-port2968928997/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1739189758675480598" to /tmp/TestFunctionalparallelMountCmdany-port2968928997/001/test-1739189758675480598
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-653300 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (288.224918ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0210 12:15:58.964059  632352 retry.go:31] will retry after 304.627681ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Feb 10 12:15 created-by-test
-rw-r--r-- 1 docker docker 24 Feb 10 12:15 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Feb 10 12:15 test-1739189758675480598
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 ssh cat /mount-9p/test-1739189758675480598
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-653300 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [1f4ba292-1397-4a6e-8e9d-acc953886fe4] Pending
helpers_test.go:344: "busybox-mount" [1f4ba292-1397-4a6e-8e9d-acc953886fe4] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [1f4ba292-1397-4a6e-8e9d-acc953886fe4] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [1f4ba292-1397-4a6e-8e9d-acc953886fe4] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.002796639s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-653300 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-653300 /tmp/TestFunctionalparallelMountCmdany-port2968928997/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.96s)
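For anyone replaying the 9p mount check outside CI, a minimal sketch using only the commands exercised above; the source directory is a placeholder and the profile name assumes the same functional-653300 cluster:

	# start the mount helper in the background (any free port is picked automatically)
	out/minikube-linux-amd64 mount -p functional-653300 /tmp/mount-src:/mount-9p --alsologtostderr -v=1 &
	# confirm the 9p filesystem is visible in the guest; a non-zero exit just means it is not mounted yet
	out/minikube-linux-amd64 -p functional-653300 ssh "findmnt -T /mount-9p | grep 9p"
	# list and read files written on the host side
	out/minikube-linux-amd64 -p functional-653300 ssh -- ls -la /mount-9p
	# tear the mount down again
	out/minikube-linux-amd64 -p functional-653300 ssh "sudo umount -f /mount-9p"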

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1327: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1332: Took "427.032458ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1341: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1346: Took "75.856979ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.50s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1378: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1383: Took "420.58476ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1391: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1396: Took "203.024838ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.62s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-653300 /tmp/TestFunctionalparallelMountCmdspecific-port3040791525/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-653300 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (252.282237ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0210 12:16:07.892733  632352 retry.go:31] will retry after 607.812532ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-653300 /tmp/TestFunctionalparallelMountCmdspecific-port3040791525/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-653300 ssh "sudo umount -f /mount-9p": exit status 1 (235.576041ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-653300 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-653300 /tmp/TestFunctionalparallelMountCmdspecific-port3040791525/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.19s)
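The specific-port variant is the same flow pinned to host port 46464. Once the mount helper has been stopped, the forced umount above exits with status 32 ("not mounted"), which the test tolerates; a sketch with a placeholder source directory:

	out/minikube-linux-amd64 mount -p functional-653300 /tmp/mount-src:/mount-9p --alsologtostderr -v=1 --port 46464 &
	out/minikube-linux-amd64 -p functional-653300 ssh "findmnt -T /mount-9p | grep 9p"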

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-653300 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3600227845/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-653300 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3600227845/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-653300 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3600227845/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-653300 ssh "findmnt -T" /mount1: exit status 1 (281.042995ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0210 12:16:10.110307  632352 retry.go:31] will retry after 353.628615ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-653300 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-653300 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-653300 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3600227845/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-653300 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3600227845/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-653300 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3600227845/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.35s)
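VerifyCleanup starts three concurrent mounts and then relies on the --kill flag to terminate every mount helper for the profile in one call; a hedged sketch (the source directory is a placeholder):

	out/minikube-linux-amd64 mount -p functional-653300 /tmp/mount-src:/mount1 --alsologtostderr -v=1 &
	out/minikube-linux-amd64 mount -p functional-653300 /tmp/mount-src:/mount2 --alsologtostderr -v=1 &
	out/minikube-linux-amd64 mount -p functional-653300 /tmp/mount-src:/mount3 --alsologtostderr -v=1 &
	# stop all mount helpers belonging to this profile
	out/minikube-linux-amd64 mount -p functional-653300 --kill=true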

                                                
                                    
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-653300
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:functional-653300
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-653300
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (186.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-630116 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0210 12:17:36.338602  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/client.crt: no such file or directory" logger="UnhandledError"
E0210 12:18:04.053272  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-630116 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m5.526776888s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (186.18s)
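The same three-control-plane topology can be reproduced with the exact start invocation used by the test, assuming KVM and CRI-O are available on the host:

	out/minikube-linux-amd64 start -p ha-630116 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 --container-runtime=crio
	# all control-plane and worker nodes should report Running
	out/minikube-linux-amd64 -p ha-630116 status -v=7 --alsologtostderr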

                                                
                                    
TestMultiControlPlane/serial/DeployApp (5.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-630116 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-630116 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-630116 -- rollout status deployment/busybox: (3.766203258s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-630116 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-630116 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-630116 -- exec busybox-58667487b6-5sg4f -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-630116 -- exec busybox-58667487b6-jr6nk -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-630116 -- exec busybox-58667487b6-lklt5 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-630116 -- exec busybox-58667487b6-5sg4f -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-630116 -- exec busybox-58667487b6-jr6nk -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-630116 -- exec busybox-58667487b6-lklt5 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-630116 -- exec busybox-58667487b6-5sg4f -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-630116 -- exec busybox-58667487b6-jr6nk -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-630116 -- exec busybox-58667487b6-lklt5 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.87s)
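The DNS assertions above amount to resolving three names from every busybox replica; a condensed sketch, where <pod> stands for each name returned by the jsonpath query:

	out/minikube-linux-amd64 kubectl -p ha-630116 -- rollout status deployment/busybox
	out/minikube-linux-amd64 kubectl -p ha-630116 -- get pods -o jsonpath='{.items[*].metadata.name}'
	out/minikube-linux-amd64 kubectl -p ha-630116 -- exec <pod> -- nslookup kubernetes.io
	out/minikube-linux-amd64 kubectl -p ha-630116 -- exec <pod> -- nslookup kubernetes.default
	out/minikube-linux-amd64 kubectl -p ha-630116 -- exec <pod> -- nslookup kubernetes.default.svc.cluster.local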

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.16s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-630116 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-630116 -- exec busybox-58667487b6-5sg4f -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-630116 -- exec busybox-58667487b6-5sg4f -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-630116 -- exec busybox-58667487b6-jr6nk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-630116 -- exec busybox-58667487b6-jr6nk -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-630116 -- exec busybox-58667487b6-lklt5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-630116 -- exec busybox-58667487b6-lklt5 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.16s)
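The pipeline used here pulls the resolved address of host.minikube.internal out of the nslookup output (fifth line, third field) and then pings it from inside the pod; a single-pod sketch, with <pod> as a placeholder:

	HOST_IP=$(out/minikube-linux-amd64 kubectl -p ha-630116 -- exec <pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
	out/minikube-linux-amd64 kubectl -p ha-630116 -- exec <pod> -- sh -c "ping -c 1 $HOST_IP"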

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (54.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-630116 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-630116 -v=7 --alsologtostderr: (53.665540247s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (54.51s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-630116 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.86s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (13.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 status --output json -v=7 --alsologtostderr
E0210 12:20:46.485431  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/functional-653300/client.crt: no such file or directory" logger="UnhandledError"
E0210 12:20:46.491875  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/functional-653300/client.crt: no such file or directory" logger="UnhandledError"
E0210 12:20:46.503356  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/functional-653300/client.crt: no such file or directory" logger="UnhandledError"
E0210 12:20:46.524828  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/functional-653300/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 cp testdata/cp-test.txt ha-630116:/home/docker/cp-test.txt
E0210 12:20:46.566934  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/functional-653300/client.crt: no such file or directory" logger="UnhandledError"
E0210 12:20:46.648455  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/functional-653300/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 ssh -n ha-630116 "sudo cat /home/docker/cp-test.txt"
E0210 12:20:46.810569  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/functional-653300/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 cp ha-630116:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2396680216/001/cp-test_ha-630116.txt
E0210 12:20:47.132193  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/functional-653300/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 ssh -n ha-630116 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 cp ha-630116:/home/docker/cp-test.txt ha-630116-m02:/home/docker/cp-test_ha-630116_ha-630116-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 ssh -n ha-630116 "sudo cat /home/docker/cp-test.txt"
E0210 12:20:47.774416  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/functional-653300/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 ssh -n ha-630116-m02 "sudo cat /home/docker/cp-test_ha-630116_ha-630116-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 cp ha-630116:/home/docker/cp-test.txt ha-630116-m03:/home/docker/cp-test_ha-630116_ha-630116-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 ssh -n ha-630116 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 ssh -n ha-630116-m03 "sudo cat /home/docker/cp-test_ha-630116_ha-630116-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 cp ha-630116:/home/docker/cp-test.txt ha-630116-m04:/home/docker/cp-test_ha-630116_ha-630116-m04.txt
E0210 12:20:49.056704  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/functional-653300/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 ssh -n ha-630116 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 ssh -n ha-630116-m04 "sudo cat /home/docker/cp-test_ha-630116_ha-630116-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 cp testdata/cp-test.txt ha-630116-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 ssh -n ha-630116-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 cp ha-630116-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2396680216/001/cp-test_ha-630116-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 ssh -n ha-630116-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 cp ha-630116-m02:/home/docker/cp-test.txt ha-630116:/home/docker/cp-test_ha-630116-m02_ha-630116.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 ssh -n ha-630116-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 ssh -n ha-630116 "sudo cat /home/docker/cp-test_ha-630116-m02_ha-630116.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 cp ha-630116-m02:/home/docker/cp-test.txt ha-630116-m03:/home/docker/cp-test_ha-630116-m02_ha-630116-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 ssh -n ha-630116-m02 "sudo cat /home/docker/cp-test.txt"
E0210 12:20:51.618225  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/functional-653300/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 ssh -n ha-630116-m03 "sudo cat /home/docker/cp-test_ha-630116-m02_ha-630116-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 cp ha-630116-m02:/home/docker/cp-test.txt ha-630116-m04:/home/docker/cp-test_ha-630116-m02_ha-630116-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 ssh -n ha-630116-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 ssh -n ha-630116-m04 "sudo cat /home/docker/cp-test_ha-630116-m02_ha-630116-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 cp testdata/cp-test.txt ha-630116-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 ssh -n ha-630116-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 cp ha-630116-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2396680216/001/cp-test_ha-630116-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 ssh -n ha-630116-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 cp ha-630116-m03:/home/docker/cp-test.txt ha-630116:/home/docker/cp-test_ha-630116-m03_ha-630116.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 ssh -n ha-630116-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 ssh -n ha-630116 "sudo cat /home/docker/cp-test_ha-630116-m03_ha-630116.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 cp ha-630116-m03:/home/docker/cp-test.txt ha-630116-m02:/home/docker/cp-test_ha-630116-m03_ha-630116-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 ssh -n ha-630116-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 ssh -n ha-630116-m02 "sudo cat /home/docker/cp-test_ha-630116-m03_ha-630116-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 cp ha-630116-m03:/home/docker/cp-test.txt ha-630116-m04:/home/docker/cp-test_ha-630116-m03_ha-630116-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 ssh -n ha-630116-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 ssh -n ha-630116-m04 "sudo cat /home/docker/cp-test_ha-630116-m03_ha-630116-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 cp testdata/cp-test.txt ha-630116-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 ssh -n ha-630116-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 cp ha-630116-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2396680216/001/cp-test_ha-630116-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 ssh -n ha-630116-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 cp ha-630116-m04:/home/docker/cp-test.txt ha-630116:/home/docker/cp-test_ha-630116-m04_ha-630116.txt
E0210 12:20:56.740396  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/functional-653300/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 ssh -n ha-630116-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 ssh -n ha-630116 "sudo cat /home/docker/cp-test_ha-630116-m04_ha-630116.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 cp ha-630116-m04:/home/docker/cp-test.txt ha-630116-m02:/home/docker/cp-test_ha-630116-m04_ha-630116-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 ssh -n ha-630116-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 ssh -n ha-630116-m02 "sudo cat /home/docker/cp-test_ha-630116-m04_ha-630116-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 cp ha-630116-m04:/home/docker/cp-test.txt ha-630116-m03:/home/docker/cp-test_ha-630116-m04_ha-630116-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 ssh -n ha-630116-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 ssh -n ha-630116-m03 "sudo cat /home/docker/cp-test_ha-630116-m04_ha-630116-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.18s)
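Every hop in the copy matrix above follows the same two-step pattern: push the file with cp, then verify it over ssh on the destination node. For example, primary node to m02:

	out/minikube-linux-amd64 -p ha-630116 cp testdata/cp-test.txt ha-630116:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p ha-630116 cp ha-630116:/home/docker/cp-test.txt ha-630116-m02:/home/docker/cp-test_ha-630116_ha-630116-m02.txt
	out/minikube-linux-amd64 -p ha-630116 ssh -n ha-630116-m02 "sudo cat /home/docker/cp-test_ha-630116_ha-630116-m02.txt"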

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (91.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 node stop m02 -v=7 --alsologtostderr
E0210 12:21:06.982441  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/functional-653300/client.crt: no such file or directory" logger="UnhandledError"
E0210 12:21:27.463969  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/functional-653300/client.crt: no such file or directory" logger="UnhandledError"
E0210 12:22:08.426008  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/functional-653300/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-630116 node stop m02 -v=7 --alsologtostderr: (1m30.997085504s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-630116 status -v=7 --alsologtostderr: exit status 7 (640.876654ms)

                                                
                                                
-- stdout --
	ha-630116
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-630116-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-630116-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-630116-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0210 12:22:29.952329  646150 out.go:345] Setting OutFile to fd 1 ...
	I0210 12:22:29.952452  646150 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 12:22:29.952460  646150 out.go:358] Setting ErrFile to fd 2...
	I0210 12:22:29.952465  646150 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 12:22:29.952631  646150 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20383-625153/.minikube/bin
	I0210 12:22:29.952792  646150 out.go:352] Setting JSON to false
	I0210 12:22:29.952820  646150 mustload.go:65] Loading cluster: ha-630116
	I0210 12:22:29.952941  646150 notify.go:220] Checking for updates...
	I0210 12:22:29.953292  646150 config.go:182] Loaded profile config "ha-630116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 12:22:29.953318  646150 status.go:174] checking status of ha-630116 ...
	I0210 12:22:29.953740  646150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:22:29.953787  646150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:22:29.974164  646150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42803
	I0210 12:22:29.974716  646150 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:22:29.975359  646150 main.go:141] libmachine: Using API Version  1
	I0210 12:22:29.975398  646150 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:22:29.975751  646150 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:22:29.975937  646150 main.go:141] libmachine: (ha-630116) Calling .GetState
	I0210 12:22:29.977622  646150 status.go:371] ha-630116 host status = "Running" (err=<nil>)
	I0210 12:22:29.977640  646150 host.go:66] Checking if "ha-630116" exists ...
	I0210 12:22:29.977921  646150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:22:29.977958  646150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:22:29.992995  646150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37121
	I0210 12:22:29.993527  646150 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:22:29.994195  646150 main.go:141] libmachine: Using API Version  1
	I0210 12:22:29.994217  646150 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:22:29.994690  646150 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:22:29.994874  646150 main.go:141] libmachine: (ha-630116) Calling .GetIP
	I0210 12:22:29.998318  646150 main.go:141] libmachine: (ha-630116) DBG | domain ha-630116 has defined MAC address 52:54:00:0c:41:17 in network mk-ha-630116
	I0210 12:22:29.998831  646150 main.go:141] libmachine: (ha-630116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:41:17", ip: ""} in network mk-ha-630116: {Iface:virbr1 ExpiryTime:2025-02-10 13:16:51 +0000 UTC Type:0 Mac:52:54:00:0c:41:17 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-630116 Clientid:01:52:54:00:0c:41:17}
	I0210 12:22:29.998864  646150 main.go:141] libmachine: (ha-630116) DBG | domain ha-630116 has defined IP address 192.168.39.2 and MAC address 52:54:00:0c:41:17 in network mk-ha-630116
	I0210 12:22:29.999011  646150 host.go:66] Checking if "ha-630116" exists ...
	I0210 12:22:29.999451  646150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:22:29.999517  646150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:22:30.016623  646150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41453
	I0210 12:22:30.017124  646150 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:22:30.017665  646150 main.go:141] libmachine: Using API Version  1
	I0210 12:22:30.017686  646150 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:22:30.017989  646150 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:22:30.018212  646150 main.go:141] libmachine: (ha-630116) Calling .DriverName
	I0210 12:22:30.018401  646150 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0210 12:22:30.018426  646150 main.go:141] libmachine: (ha-630116) Calling .GetSSHHostname
	I0210 12:22:30.021251  646150 main.go:141] libmachine: (ha-630116) DBG | domain ha-630116 has defined MAC address 52:54:00:0c:41:17 in network mk-ha-630116
	I0210 12:22:30.021733  646150 main.go:141] libmachine: (ha-630116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:41:17", ip: ""} in network mk-ha-630116: {Iface:virbr1 ExpiryTime:2025-02-10 13:16:51 +0000 UTC Type:0 Mac:52:54:00:0c:41:17 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-630116 Clientid:01:52:54:00:0c:41:17}
	I0210 12:22:30.021762  646150 main.go:141] libmachine: (ha-630116) DBG | domain ha-630116 has defined IP address 192.168.39.2 and MAC address 52:54:00:0c:41:17 in network mk-ha-630116
	I0210 12:22:30.021927  646150 main.go:141] libmachine: (ha-630116) Calling .GetSSHPort
	I0210 12:22:30.022087  646150 main.go:141] libmachine: (ha-630116) Calling .GetSSHKeyPath
	I0210 12:22:30.022237  646150 main.go:141] libmachine: (ha-630116) Calling .GetSSHUsername
	I0210 12:22:30.022401  646150 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/ha-630116/id_rsa Username:docker}
	I0210 12:22:30.109520  646150 ssh_runner.go:195] Run: systemctl --version
	I0210 12:22:30.116098  646150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 12:22:30.133012  646150 kubeconfig.go:125] found "ha-630116" server: "https://192.168.39.254:8443"
	I0210 12:22:30.133061  646150 api_server.go:166] Checking apiserver status ...
	I0210 12:22:30.133126  646150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 12:22:30.148480  646150 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1138/cgroup
	W0210 12:22:30.157349  646150 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1138/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0210 12:22:30.157421  646150 ssh_runner.go:195] Run: ls
	I0210 12:22:30.161429  646150 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0210 12:22:30.167772  646150 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0210 12:22:30.167801  646150 status.go:463] ha-630116 apiserver status = Running (err=<nil>)
	I0210 12:22:30.167812  646150 status.go:176] ha-630116 status: &{Name:ha-630116 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0210 12:22:30.167829  646150 status.go:174] checking status of ha-630116-m02 ...
	I0210 12:22:30.168205  646150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:22:30.168256  646150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:22:30.183832  646150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41193
	I0210 12:22:30.184332  646150 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:22:30.184897  646150 main.go:141] libmachine: Using API Version  1
	I0210 12:22:30.184921  646150 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:22:30.185283  646150 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:22:30.185499  646150 main.go:141] libmachine: (ha-630116-m02) Calling .GetState
	I0210 12:22:30.186988  646150 status.go:371] ha-630116-m02 host status = "Stopped" (err=<nil>)
	I0210 12:22:30.187005  646150 status.go:384] host is not running, skipping remaining checks
	I0210 12:22:30.187012  646150 status.go:176] ha-630116-m02 status: &{Name:ha-630116-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0210 12:22:30.187033  646150 status.go:174] checking status of ha-630116-m03 ...
	I0210 12:22:30.187328  646150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:22:30.187377  646150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:22:30.202720  646150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35975
	I0210 12:22:30.203116  646150 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:22:30.203545  646150 main.go:141] libmachine: Using API Version  1
	I0210 12:22:30.203565  646150 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:22:30.203842  646150 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:22:30.204030  646150 main.go:141] libmachine: (ha-630116-m03) Calling .GetState
	I0210 12:22:30.205591  646150 status.go:371] ha-630116-m03 host status = "Running" (err=<nil>)
	I0210 12:22:30.205611  646150 host.go:66] Checking if "ha-630116-m03" exists ...
	I0210 12:22:30.205961  646150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:22:30.206005  646150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:22:30.220627  646150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42545
	I0210 12:22:30.221062  646150 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:22:30.221589  646150 main.go:141] libmachine: Using API Version  1
	I0210 12:22:30.221608  646150 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:22:30.221929  646150 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:22:30.222146  646150 main.go:141] libmachine: (ha-630116-m03) Calling .GetIP
	I0210 12:22:30.224807  646150 main.go:141] libmachine: (ha-630116-m03) DBG | domain ha-630116-m03 has defined MAC address 52:54:00:5a:79:56 in network mk-ha-630116
	I0210 12:22:30.225297  646150 main.go:141] libmachine: (ha-630116-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:79:56", ip: ""} in network mk-ha-630116: {Iface:virbr1 ExpiryTime:2025-02-10 13:18:45 +0000 UTC Type:0 Mac:52:54:00:5a:79:56 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-630116-m03 Clientid:01:52:54:00:5a:79:56}
	I0210 12:22:30.225336  646150 main.go:141] libmachine: (ha-630116-m03) DBG | domain ha-630116-m03 has defined IP address 192.168.39.79 and MAC address 52:54:00:5a:79:56 in network mk-ha-630116
	I0210 12:22:30.225463  646150 host.go:66] Checking if "ha-630116-m03" exists ...
	I0210 12:22:30.225753  646150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:22:30.225793  646150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:22:30.240453  646150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43283
	I0210 12:22:30.240824  646150 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:22:30.241366  646150 main.go:141] libmachine: Using API Version  1
	I0210 12:22:30.241387  646150 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:22:30.241679  646150 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:22:30.241889  646150 main.go:141] libmachine: (ha-630116-m03) Calling .DriverName
	I0210 12:22:30.242086  646150 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0210 12:22:30.242126  646150 main.go:141] libmachine: (ha-630116-m03) Calling .GetSSHHostname
	I0210 12:22:30.245194  646150 main.go:141] libmachine: (ha-630116-m03) DBG | domain ha-630116-m03 has defined MAC address 52:54:00:5a:79:56 in network mk-ha-630116
	I0210 12:22:30.245608  646150 main.go:141] libmachine: (ha-630116-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:79:56", ip: ""} in network mk-ha-630116: {Iface:virbr1 ExpiryTime:2025-02-10 13:18:45 +0000 UTC Type:0 Mac:52:54:00:5a:79:56 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-630116-m03 Clientid:01:52:54:00:5a:79:56}
	I0210 12:22:30.245629  646150 main.go:141] libmachine: (ha-630116-m03) DBG | domain ha-630116-m03 has defined IP address 192.168.39.79 and MAC address 52:54:00:5a:79:56 in network mk-ha-630116
	I0210 12:22:30.245789  646150 main.go:141] libmachine: (ha-630116-m03) Calling .GetSSHPort
	I0210 12:22:30.245967  646150 main.go:141] libmachine: (ha-630116-m03) Calling .GetSSHKeyPath
	I0210 12:22:30.246154  646150 main.go:141] libmachine: (ha-630116-m03) Calling .GetSSHUsername
	I0210 12:22:30.246278  646150 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/ha-630116-m03/id_rsa Username:docker}
	I0210 12:22:30.329193  646150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 12:22:30.345063  646150 kubeconfig.go:125] found "ha-630116" server: "https://192.168.39.254:8443"
	I0210 12:22:30.345135  646150 api_server.go:166] Checking apiserver status ...
	I0210 12:22:30.345222  646150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 12:22:30.359638  646150 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1466/cgroup
	W0210 12:22:30.367960  646150 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1466/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0210 12:22:30.368030  646150 ssh_runner.go:195] Run: ls
	I0210 12:22:30.371769  646150 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0210 12:22:30.376367  646150 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0210 12:22:30.376387  646150 status.go:463] ha-630116-m03 apiserver status = Running (err=<nil>)
	I0210 12:22:30.376395  646150 status.go:176] ha-630116-m03 status: &{Name:ha-630116-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0210 12:22:30.376411  646150 status.go:174] checking status of ha-630116-m04 ...
	I0210 12:22:30.376826  646150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:22:30.376881  646150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:22:30.392323  646150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35395
	I0210 12:22:30.392735  646150 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:22:30.393344  646150 main.go:141] libmachine: Using API Version  1
	I0210 12:22:30.393365  646150 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:22:30.393786  646150 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:22:30.393940  646150 main.go:141] libmachine: (ha-630116-m04) Calling .GetState
	I0210 12:22:30.395588  646150 status.go:371] ha-630116-m04 host status = "Running" (err=<nil>)
	I0210 12:22:30.395612  646150 host.go:66] Checking if "ha-630116-m04" exists ...
	I0210 12:22:30.395874  646150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:22:30.395912  646150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:22:30.410455  646150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33549
	I0210 12:22:30.410852  646150 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:22:30.411306  646150 main.go:141] libmachine: Using API Version  1
	I0210 12:22:30.411326  646150 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:22:30.411611  646150 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:22:30.411778  646150 main.go:141] libmachine: (ha-630116-m04) Calling .GetIP
	I0210 12:22:30.414367  646150 main.go:141] libmachine: (ha-630116-m04) DBG | domain ha-630116-m04 has defined MAC address 52:54:00:f5:22:cb in network mk-ha-630116
	I0210 12:22:30.414758  646150 main.go:141] libmachine: (ha-630116-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:22:cb", ip: ""} in network mk-ha-630116: {Iface:virbr1 ExpiryTime:2025-02-10 13:20:05 +0000 UTC Type:0 Mac:52:54:00:f5:22:cb Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-630116-m04 Clientid:01:52:54:00:f5:22:cb}
	I0210 12:22:30.414793  646150 main.go:141] libmachine: (ha-630116-m04) DBG | domain ha-630116-m04 has defined IP address 192.168.39.190 and MAC address 52:54:00:f5:22:cb in network mk-ha-630116
	I0210 12:22:30.414942  646150 host.go:66] Checking if "ha-630116-m04" exists ...
	I0210 12:22:30.415238  646150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:22:30.415272  646150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:22:30.430042  646150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41829
	I0210 12:22:30.430503  646150 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:22:30.430971  646150 main.go:141] libmachine: Using API Version  1
	I0210 12:22:30.430992  646150 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:22:30.431300  646150 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:22:30.431481  646150 main.go:141] libmachine: (ha-630116-m04) Calling .DriverName
	I0210 12:22:30.431660  646150 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0210 12:22:30.431689  646150 main.go:141] libmachine: (ha-630116-m04) Calling .GetSSHHostname
	I0210 12:22:30.434124  646150 main.go:141] libmachine: (ha-630116-m04) DBG | domain ha-630116-m04 has defined MAC address 52:54:00:f5:22:cb in network mk-ha-630116
	I0210 12:22:30.434580  646150 main.go:141] libmachine: (ha-630116-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:22:cb", ip: ""} in network mk-ha-630116: {Iface:virbr1 ExpiryTime:2025-02-10 13:20:05 +0000 UTC Type:0 Mac:52:54:00:f5:22:cb Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-630116-m04 Clientid:01:52:54:00:f5:22:cb}
	I0210 12:22:30.434613  646150 main.go:141] libmachine: (ha-630116-m04) DBG | domain ha-630116-m04 has defined IP address 192.168.39.190 and MAC address 52:54:00:f5:22:cb in network mk-ha-630116
	I0210 12:22:30.434753  646150 main.go:141] libmachine: (ha-630116-m04) Calling .GetSSHPort
	I0210 12:22:30.434948  646150 main.go:141] libmachine: (ha-630116-m04) Calling .GetSSHKeyPath
	I0210 12:22:30.435102  646150 main.go:141] libmachine: (ha-630116-m04) Calling .GetSSHUsername
	I0210 12:22:30.435262  646150 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/ha-630116-m04/id_rsa Username:docker}
	I0210 12:22:30.521803  646150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 12:22:30.540117  646150 status.go:176] ha-630116-m04 status: &{Name:ha-630116-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (91.64s)
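The non-zero exit from status is expected while m02 is down; the test asserts the degraded state (m02 Stopped, the remaining nodes Running) rather than failing on the exit code. The stop-and-inspect sequence is simply:

	out/minikube-linux-amd64 -p ha-630116 node stop m02 -v=7 --alsologtostderr
	# exits non-zero while any node is stopped, which is what the test expects here
	out/minikube-linux-amd64 -p ha-630116 status -v=7 --alsologtostderr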

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.64s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (48.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 node start m02 -v=7 --alsologtostderr
E0210 12:22:36.338677  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-630116 node start m02 -v=7 --alsologtostderr: (47.56934206s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (48.49s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.85s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (445.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-630116 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-630116 -v=7 --alsologtostderr
E0210 12:23:30.348372  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/functional-653300/client.crt: no such file or directory" logger="UnhandledError"
E0210 12:25:46.485284  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/functional-653300/client.crt: no such file or directory" logger="UnhandledError"
E0210 12:26:14.190559  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/functional-653300/client.crt: no such file or directory" logger="UnhandledError"
E0210 12:27:36.338749  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-630116 -v=7 --alsologtostderr: (4m34.177609913s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-630116 --wait=true -v=7 --alsologtostderr
E0210 12:28:59.414814  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-630116 --wait=true -v=7 --alsologtostderr: (2m51.42853352s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-630116
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (445.73s)
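The assertion behind RestartClusterKeepsNodes is that the node list before a full stop/start cycle matches the list afterwards; the commands involved:

	out/minikube-linux-amd64 node list -p ha-630116 -v=7 --alsologtostderr
	out/minikube-linux-amd64 stop -p ha-630116 -v=7 --alsologtostderr
	out/minikube-linux-amd64 start -p ha-630116 --wait=true -v=7 --alsologtostderr
	out/minikube-linux-amd64 node list -p ha-630116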

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (18.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 node delete m03 -v=7 --alsologtostderr
E0210 12:30:46.485883  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/functional-653300/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-630116 node delete m03 -v=7 --alsologtostderr: (17.634963696s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.40s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.64s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (272.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 stop -v=7 --alsologtostderr
E0210 12:32:36.337632  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-630116 stop -v=7 --alsologtostderr: (4m32.614106256s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-630116 status -v=7 --alsologtostderr: exit status 7 (117.22837ms)

                                                
                                                
-- stdout --
	ha-630116
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-630116-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-630116-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0210 12:35:37.966592  650330 out.go:345] Setting OutFile to fd 1 ...
	I0210 12:35:37.966721  650330 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 12:35:37.966731  650330 out.go:358] Setting ErrFile to fd 2...
	I0210 12:35:37.966736  650330 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 12:35:37.966937  650330 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20383-625153/.minikube/bin
	I0210 12:35:37.967194  650330 out.go:352] Setting JSON to false
	I0210 12:35:37.967226  650330 mustload.go:65] Loading cluster: ha-630116
	I0210 12:35:37.967302  650330 notify.go:220] Checking for updates...
	I0210 12:35:37.967654  650330 config.go:182] Loaded profile config "ha-630116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 12:35:37.967678  650330 status.go:174] checking status of ha-630116 ...
	I0210 12:35:37.968132  650330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:35:37.968178  650330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:35:37.990131  650330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43199
	I0210 12:35:37.990608  650330 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:35:37.991237  650330 main.go:141] libmachine: Using API Version  1
	I0210 12:35:37.991260  650330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:35:37.991674  650330 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:35:37.991918  650330 main.go:141] libmachine: (ha-630116) Calling .GetState
	I0210 12:35:37.993645  650330 status.go:371] ha-630116 host status = "Stopped" (err=<nil>)
	I0210 12:35:37.993666  650330 status.go:384] host is not running, skipping remaining checks
	I0210 12:35:37.993674  650330 status.go:176] ha-630116 status: &{Name:ha-630116 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0210 12:35:37.993718  650330 status.go:174] checking status of ha-630116-m02 ...
	I0210 12:35:37.994017  650330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:35:37.994056  650330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:35:38.009680  650330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34757
	I0210 12:35:38.010089  650330 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:35:38.010594  650330 main.go:141] libmachine: Using API Version  1
	I0210 12:35:38.010617  650330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:35:38.010917  650330 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:35:38.011132  650330 main.go:141] libmachine: (ha-630116-m02) Calling .GetState
	I0210 12:35:38.012870  650330 status.go:371] ha-630116-m02 host status = "Stopped" (err=<nil>)
	I0210 12:35:38.012887  650330 status.go:384] host is not running, skipping remaining checks
	I0210 12:35:38.012893  650330 status.go:176] ha-630116-m02 status: &{Name:ha-630116-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0210 12:35:38.012910  650330 status.go:174] checking status of ha-630116-m04 ...
	I0210 12:35:38.013256  650330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:35:38.013306  650330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:35:38.028774  650330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42707
	I0210 12:35:38.029336  650330 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:35:38.029847  650330 main.go:141] libmachine: Using API Version  1
	I0210 12:35:38.029874  650330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:35:38.030240  650330 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:35:38.030435  650330 main.go:141] libmachine: (ha-630116-m04) Calling .GetState
	I0210 12:35:38.031883  650330 status.go:371] ha-630116-m04 host status = "Stopped" (err=<nil>)
	I0210 12:35:38.031900  650330 status.go:384] host is not running, skipping remaining checks
	I0210 12:35:38.031907  650330 status.go:176] ha-630116-m04 status: &{Name:ha-630116-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (272.73s)
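
For anyone scripting around the behaviour shown above: once every host is stopped, `minikube status` still prints the per-node table but exits non-zero, which is what the test keys on (exit status 7 in this run). A minimal sketch, using the profile from this run; the meaning of the specific exit code is taken from this log rather than from documentation:

    out/minikube-linux-amd64 -p ha-630116 status -v=7 --alsologtostderr
    rc=$?
    if [ "$rc" -ne 0 ]; then
        echo "at least one node is not running (minikube status exit code $rc)"
    fi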

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (99.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-630116 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0210 12:35:46.485757  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/functional-653300/client.crt: no such file or directory" logger="UnhandledError"
E0210 12:37:09.552231  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/functional-653300/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-630116 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m38.803101331s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (99.56s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.63s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (77.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-630116 --control-plane -v=7 --alsologtostderr
E0210 12:37:36.338149  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-630116 --control-plane -v=7 --alsologtostderr: (1m17.104752333s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-630116 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (77.96s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.89s)

                                                
                                    
TestJSONOutput/start/Command (55.43s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-973151 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-973151 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (55.425939068s)
--- PASS: TestJSONOutput/start/Command (55.43s)
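
With --output=json, minikube emits one CloudEvent per line on stdout (the same format captured verbatim in TestErrorJSONOutput further down). To follow start progress by hand, a small jq filter over the step events works; this is a sketch based on the fields visible in that captured output (type, data.currentstep, data.totalsteps, data.message), not something the test itself runs:

    out/minikube-linux-amd64 start -p json-output-973151 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 --container-runtime=crio \
        | jq -r 'select(.type | endswith(".step")) | "[" + .data.currentstep + "/" + .data.totalsteps + "] " + .data.message'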

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.67s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-973151 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.67s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.59s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-973151 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.36s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-973151 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-973151 --output=json --user=testUser: (7.362000505s)
--- PASS: TestJSONOutput/stop/Command (7.36s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.21s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-067451 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-067451 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (67.000667ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"dcee4d07-b21e-47e2-b65a-57283474b488","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-067451] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"12c84394-e30b-492f-9dfa-4fac0fb4bdf2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20383"}}
	{"specversion":"1.0","id":"87034177-3850-4861-bea8-e965ec834abf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"bfa88f95-44c5-42d6-9fbc-1913a0337f72","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20383-625153/kubeconfig"}}
	{"specversion":"1.0","id":"189a9f11-bac5-4a22-a5ea-be1b15d4e790","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20383-625153/.minikube"}}
	{"specversion":"1.0","id":"c030d670-af08-46ac-b82b-82dbda10b08a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"1f902135-deed-4786-997e-ca0fda7926be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6c3110ab-3edb-402a-b358-6c511280d194","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-067451" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-067451
--- PASS: TestErrorJSONOutput (0.21s)
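
The captured stdout above shows the error path of the same CloudEvents stream: the final event has type io.k8s.sigs.minikube.error with the exit code, error name, and message in its data payload. Pulling just that record out with jq (a sketch assuming jq is available; field names are taken from the output above):

    out/minikube-linux-amd64 start -p json-output-error-067451 --memory=2200 --output=json --wait=true --driver=fail \
        | jq -r 'select(.type | endswith(".error")) | .data.name + " (exit " + .data.exitcode + "): " + .data.message'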

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (93.14s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-722530 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-722530 --driver=kvm2  --container-runtime=crio: (47.019161392s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-746717 --driver=kvm2  --container-runtime=crio
E0210 12:40:46.485609  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/functional-653300/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-746717 --driver=kvm2  --container-runtime=crio: (43.248257885s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-722530
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-746717
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-746717" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-746717
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-746717: (1.007842329s)
helpers_test.go:175: Cleaning up "first-722530" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-722530
--- PASS: TestMinikubeProfile (93.14s)
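
The profile switching above is confirmed with `profile list -ojson`. If you need the same information in a script, the JSON output can be filtered with jq; note that the field names below (a top-level `valid` array of profiles, each with a `Name` field) are an assumption about the current schema and should be checked against your minikube version:

    out/minikube-linux-amd64 profile list -ojson | jq -r '.valid[].Name'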

                                                
                                    
TestMountStart/serial/StartWithMountFirst (24.82s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-205390 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-205390 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (23.8148756s)
--- PASS: TestMountStart/serial/StartWithMountFirst (24.82s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-205390 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-205390 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (27.7s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-221560 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-221560 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.694909005s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.70s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-221560 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-221560 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.39s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.72s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-205390 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.72s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-221560 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-221560 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                    
TestMountStart/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-221560
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-221560: (1.281813984s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
TestMountStart/serial/RestartStopped (22.89s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-221560
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-221560: (21.893380538s)
E0210 12:42:36.337921  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMountStart/serial/RestartStopped (22.89s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-221560 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-221560 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (111.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-790589 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-790589 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m50.740183938s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790589 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (111.16s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.93s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-790589 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-790589 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-790589 -- rollout status deployment/busybox: (3.455007986s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-790589 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-790589 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-790589 -- exec busybox-58667487b6-65px9 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-790589 -- exec busybox-58667487b6-vnrrz -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-790589 -- exec busybox-58667487b6-65px9 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-790589 -- exec busybox-58667487b6-vnrrz -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-790589 -- exec busybox-58667487b6-65px9 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-790589 -- exec busybox-58667487b6-vnrrz -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.93s)
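
The manifest referenced above (testdata/multinodes/multinode-pod-dns-test.yaml) is not reproduced in this report. To rerun the same kind of DNS check by hand against this cluster, any small multi-replica busybox Deployment will do; the sketch below is a stand-in written for illustration, not the actual testdata file, and the image tag is an assumption:

    kubectl --context multinode-790589 apply -f - <<'EOF'
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: busybox
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: busybox
      template:
        metadata:
          labels:
            app: busybox
        spec:
          containers:
          - name: busybox
            image: busybox:1.36
            command: ["sleep", "3600"]
    EOF
    kubectl --context multinode-790589 rollout status deployment/busybox
    kubectl --context multinode-790589 exec deploy/busybox -- nslookup kubernetes.default.svc.cluster.local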

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-790589 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-790589 -- exec busybox-58667487b6-65px9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-790589 -- exec busybox-58667487b6-65px9 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-790589 -- exec busybox-58667487b6-vnrrz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-790589 -- exec busybox-58667487b6-vnrrz -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.78s)
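
A word on the pipeline used in this test: `nslookup host.minikube.internal` runs inside each busybox pod, `awk 'NR==5'` picks the line where busybox nslookup prints the resolved address, and `cut -d' ' -f3` extracts that address, which the test then pings. In this run it resolves to 192.168.39.1, the host side of the KVM network. Reproducing it for one pod (pod name taken from the log above):

    kubectl --context multinode-790589 exec busybox-58667487b6-65px9 -- \
        sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"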

                                                
                                    
TestMultiNode/serial/AddNode (51.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-790589 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-790589 -v 3 --alsologtostderr: (51.420440036s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790589 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (51.98s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-790589 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.56s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.56s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790589 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790589 cp testdata/cp-test.txt multinode-790589:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790589 ssh -n multinode-790589 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790589 cp multinode-790589:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4209733324/001/cp-test_multinode-790589.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790589 ssh -n multinode-790589 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790589 cp multinode-790589:/home/docker/cp-test.txt multinode-790589-m02:/home/docker/cp-test_multinode-790589_multinode-790589-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790589 ssh -n multinode-790589 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790589 ssh -n multinode-790589-m02 "sudo cat /home/docker/cp-test_multinode-790589_multinode-790589-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790589 cp multinode-790589:/home/docker/cp-test.txt multinode-790589-m03:/home/docker/cp-test_multinode-790589_multinode-790589-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790589 ssh -n multinode-790589 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790589 ssh -n multinode-790589-m03 "sudo cat /home/docker/cp-test_multinode-790589_multinode-790589-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790589 cp testdata/cp-test.txt multinode-790589-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790589 ssh -n multinode-790589-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790589 cp multinode-790589-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4209733324/001/cp-test_multinode-790589-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790589 ssh -n multinode-790589-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790589 cp multinode-790589-m02:/home/docker/cp-test.txt multinode-790589:/home/docker/cp-test_multinode-790589-m02_multinode-790589.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790589 ssh -n multinode-790589-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790589 ssh -n multinode-790589 "sudo cat /home/docker/cp-test_multinode-790589-m02_multinode-790589.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790589 cp multinode-790589-m02:/home/docker/cp-test.txt multinode-790589-m03:/home/docker/cp-test_multinode-790589-m02_multinode-790589-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790589 ssh -n multinode-790589-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790589 ssh -n multinode-790589-m03 "sudo cat /home/docker/cp-test_multinode-790589-m02_multinode-790589-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790589 cp testdata/cp-test.txt multinode-790589-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790589 ssh -n multinode-790589-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790589 cp multinode-790589-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4209733324/001/cp-test_multinode-790589-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790589 ssh -n multinode-790589-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790589 cp multinode-790589-m03:/home/docker/cp-test.txt multinode-790589:/home/docker/cp-test_multinode-790589-m03_multinode-790589.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790589 ssh -n multinode-790589-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790589 ssh -n multinode-790589 "sudo cat /home/docker/cp-test_multinode-790589-m03_multinode-790589.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790589 cp multinode-790589-m03:/home/docker/cp-test.txt multinode-790589-m02:/home/docker/cp-test_multinode-790589-m03_multinode-790589-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790589 ssh -n multinode-790589-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790589 ssh -n multinode-790589-m02 "sudo cat /home/docker/cp-test_multinode-790589-m03_multinode-790589-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.26s)
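
The long sequence above exercises all three directions that `minikube cp` supports, each copy being verified afterwards with `ssh -n <node> "sudo cat ..."`. Grouped for readability (profile, node names, and paths are the ones used in the run):

    # host -> node
    out/minikube-linux-amd64 -p multinode-790589 cp testdata/cp-test.txt multinode-790589-m02:/home/docker/cp-test.txt
    # node -> host
    out/minikube-linux-amd64 -p multinode-790589 cp multinode-790589-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4209733324/001/cp-test_multinode-790589-m02.txt
    # node -> node
    out/minikube-linux-amd64 -p multinode-790589 cp multinode-790589-m02:/home/docker/cp-test.txt multinode-790589-m03:/home/docker/cp-test_multinode-790589-m02_multinode-790589-m03.txt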

                                                
                                    
TestMultiNode/serial/StopNode (2.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790589 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-790589 node stop m03: (1.397489865s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790589 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-790589 status: exit status 7 (423.792104ms)

                                                
                                                
-- stdout --
	multinode-790589
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-790589-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-790589-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790589 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-790589 status --alsologtostderr: exit status 7 (425.891562ms)

                                                
                                                
-- stdout --
	multinode-790589
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-790589-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-790589-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0210 12:45:36.883089  658020 out.go:345] Setting OutFile to fd 1 ...
	I0210 12:45:36.883199  658020 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 12:45:36.883210  658020 out.go:358] Setting ErrFile to fd 2...
	I0210 12:45:36.883217  658020 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 12:45:36.883452  658020 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20383-625153/.minikube/bin
	I0210 12:45:36.883628  658020 out.go:352] Setting JSON to false
	I0210 12:45:36.883659  658020 mustload.go:65] Loading cluster: multinode-790589
	I0210 12:45:36.883708  658020 notify.go:220] Checking for updates...
	I0210 12:45:36.884204  658020 config.go:182] Loaded profile config "multinode-790589": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 12:45:36.884236  658020 status.go:174] checking status of multinode-790589 ...
	I0210 12:45:36.884756  658020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:45:36.884802  658020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:45:36.902054  658020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38247
	I0210 12:45:36.902475  658020 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:45:36.903164  658020 main.go:141] libmachine: Using API Version  1
	I0210 12:45:36.903215  658020 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:45:36.903533  658020 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:45:36.903755  658020 main.go:141] libmachine: (multinode-790589) Calling .GetState
	I0210 12:45:36.905447  658020 status.go:371] multinode-790589 host status = "Running" (err=<nil>)
	I0210 12:45:36.905467  658020 host.go:66] Checking if "multinode-790589" exists ...
	I0210 12:45:36.905896  658020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:45:36.905945  658020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:45:36.921642  658020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33529
	I0210 12:45:36.922050  658020 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:45:36.922515  658020 main.go:141] libmachine: Using API Version  1
	I0210 12:45:36.922538  658020 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:45:36.922824  658020 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:45:36.923039  658020 main.go:141] libmachine: (multinode-790589) Calling .GetIP
	I0210 12:45:36.925771  658020 main.go:141] libmachine: (multinode-790589) DBG | domain multinode-790589 has defined MAC address 52:54:00:70:6f:04 in network mk-multinode-790589
	I0210 12:45:36.926208  658020 main.go:141] libmachine: (multinode-790589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:6f:04", ip: ""} in network mk-multinode-790589: {Iface:virbr1 ExpiryTime:2025-02-10 13:42:52 +0000 UTC Type:0 Mac:52:54:00:70:6f:04 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:multinode-790589 Clientid:01:52:54:00:70:6f:04}
	I0210 12:45:36.926235  658020 main.go:141] libmachine: (multinode-790589) DBG | domain multinode-790589 has defined IP address 192.168.39.89 and MAC address 52:54:00:70:6f:04 in network mk-multinode-790589
	I0210 12:45:36.926364  658020 host.go:66] Checking if "multinode-790589" exists ...
	I0210 12:45:36.926659  658020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:45:36.926707  658020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:45:36.942819  658020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44735
	I0210 12:45:36.943232  658020 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:45:36.943717  658020 main.go:141] libmachine: Using API Version  1
	I0210 12:45:36.943742  658020 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:45:36.944096  658020 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:45:36.944292  658020 main.go:141] libmachine: (multinode-790589) Calling .DriverName
	I0210 12:45:36.944475  658020 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0210 12:45:36.944497  658020 main.go:141] libmachine: (multinode-790589) Calling .GetSSHHostname
	I0210 12:45:36.947267  658020 main.go:141] libmachine: (multinode-790589) DBG | domain multinode-790589 has defined MAC address 52:54:00:70:6f:04 in network mk-multinode-790589
	I0210 12:45:36.947658  658020 main.go:141] libmachine: (multinode-790589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:6f:04", ip: ""} in network mk-multinode-790589: {Iface:virbr1 ExpiryTime:2025-02-10 13:42:52 +0000 UTC Type:0 Mac:52:54:00:70:6f:04 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:multinode-790589 Clientid:01:52:54:00:70:6f:04}
	I0210 12:45:36.947681  658020 main.go:141] libmachine: (multinode-790589) DBG | domain multinode-790589 has defined IP address 192.168.39.89 and MAC address 52:54:00:70:6f:04 in network mk-multinode-790589
	I0210 12:45:36.947788  658020 main.go:141] libmachine: (multinode-790589) Calling .GetSSHPort
	I0210 12:45:36.947974  658020 main.go:141] libmachine: (multinode-790589) Calling .GetSSHKeyPath
	I0210 12:45:36.948127  658020 main.go:141] libmachine: (multinode-790589) Calling .GetSSHUsername
	I0210 12:45:36.948258  658020 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/multinode-790589/id_rsa Username:docker}
	I0210 12:45:37.028130  658020 ssh_runner.go:195] Run: systemctl --version
	I0210 12:45:37.033720  658020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 12:45:37.047749  658020 kubeconfig.go:125] found "multinode-790589" server: "https://192.168.39.89:8443"
	I0210 12:45:37.047800  658020 api_server.go:166] Checking apiserver status ...
	I0210 12:45:37.047842  658020 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 12:45:37.059795  658020 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1091/cgroup
	W0210 12:45:37.068440  658020 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1091/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0210 12:45:37.068505  658020 ssh_runner.go:195] Run: ls
	I0210 12:45:37.072320  658020 api_server.go:253] Checking apiserver healthz at https://192.168.39.89:8443/healthz ...
	I0210 12:45:37.076506  658020 api_server.go:279] https://192.168.39.89:8443/healthz returned 200:
	ok
	I0210 12:45:37.076532  658020 status.go:463] multinode-790589 apiserver status = Running (err=<nil>)
	I0210 12:45:37.076541  658020 status.go:176] multinode-790589 status: &{Name:multinode-790589 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0210 12:45:37.076571  658020 status.go:174] checking status of multinode-790589-m02 ...
	I0210 12:45:37.076892  658020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:45:37.076924  658020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:45:37.093234  658020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43077
	I0210 12:45:37.093720  658020 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:45:37.094315  658020 main.go:141] libmachine: Using API Version  1
	I0210 12:45:37.094355  658020 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:45:37.094719  658020 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:45:37.094931  658020 main.go:141] libmachine: (multinode-790589-m02) Calling .GetState
	I0210 12:45:37.096564  658020 status.go:371] multinode-790589-m02 host status = "Running" (err=<nil>)
	I0210 12:45:37.096584  658020 host.go:66] Checking if "multinode-790589-m02" exists ...
	I0210 12:45:37.096900  658020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:45:37.096925  658020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:45:37.113064  658020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38019
	I0210 12:45:37.113557  658020 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:45:37.114052  658020 main.go:141] libmachine: Using API Version  1
	I0210 12:45:37.114077  658020 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:45:37.114419  658020 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:45:37.114588  658020 main.go:141] libmachine: (multinode-790589-m02) Calling .GetIP
	I0210 12:45:37.117593  658020 main.go:141] libmachine: (multinode-790589-m02) DBG | domain multinode-790589-m02 has defined MAC address 52:54:00:5a:b1:83 in network mk-multinode-790589
	I0210 12:45:37.118013  658020 main.go:141] libmachine: (multinode-790589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:b1:83", ip: ""} in network mk-multinode-790589: {Iface:virbr1 ExpiryTime:2025-02-10 13:43:54 +0000 UTC Type:0 Mac:52:54:00:5a:b1:83 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:multinode-790589-m02 Clientid:01:52:54:00:5a:b1:83}
	I0210 12:45:37.118037  658020 main.go:141] libmachine: (multinode-790589-m02) DBG | domain multinode-790589-m02 has defined IP address 192.168.39.12 and MAC address 52:54:00:5a:b1:83 in network mk-multinode-790589
	I0210 12:45:37.118182  658020 host.go:66] Checking if "multinode-790589-m02" exists ...
	I0210 12:45:37.118555  658020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:45:37.118608  658020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:45:37.135401  658020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41125
	I0210 12:45:37.135802  658020 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:45:37.136244  658020 main.go:141] libmachine: Using API Version  1
	I0210 12:45:37.136265  658020 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:45:37.136636  658020 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:45:37.136884  658020 main.go:141] libmachine: (multinode-790589-m02) Calling .DriverName
	I0210 12:45:37.137086  658020 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0210 12:45:37.137127  658020 main.go:141] libmachine: (multinode-790589-m02) Calling .GetSSHHostname
	I0210 12:45:37.139639  658020 main.go:141] libmachine: (multinode-790589-m02) DBG | domain multinode-790589-m02 has defined MAC address 52:54:00:5a:b1:83 in network mk-multinode-790589
	I0210 12:45:37.140098  658020 main.go:141] libmachine: (multinode-790589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:b1:83", ip: ""} in network mk-multinode-790589: {Iface:virbr1 ExpiryTime:2025-02-10 13:43:54 +0000 UTC Type:0 Mac:52:54:00:5a:b1:83 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:multinode-790589-m02 Clientid:01:52:54:00:5a:b1:83}
	I0210 12:45:37.140123  658020 main.go:141] libmachine: (multinode-790589-m02) DBG | domain multinode-790589-m02 has defined IP address 192.168.39.12 and MAC address 52:54:00:5a:b1:83 in network mk-multinode-790589
	I0210 12:45:37.140249  658020 main.go:141] libmachine: (multinode-790589-m02) Calling .GetSSHPort
	I0210 12:45:37.140436  658020 main.go:141] libmachine: (multinode-790589-m02) Calling .GetSSHKeyPath
	I0210 12:45:37.140605  658020 main.go:141] libmachine: (multinode-790589-m02) Calling .GetSSHUsername
	I0210 12:45:37.140753  658020 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20383-625153/.minikube/machines/multinode-790589-m02/id_rsa Username:docker}
	I0210 12:45:37.225031  658020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 12:45:37.238769  658020 status.go:176] multinode-790589-m02 status: &{Name:multinode-790589-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0210 12:45:37.238804  658020 status.go:174] checking status of multinode-790589-m03 ...
	I0210 12:45:37.239141  658020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:45:37.239174  658020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:45:37.255699  658020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37447
	I0210 12:45:37.256159  658020 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:45:37.256716  658020 main.go:141] libmachine: Using API Version  1
	I0210 12:45:37.256738  658020 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:45:37.257079  658020 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:45:37.257300  658020 main.go:141] libmachine: (multinode-790589-m03) Calling .GetState
	I0210 12:45:37.258872  658020 status.go:371] multinode-790589-m03 host status = "Stopped" (err=<nil>)
	I0210 12:45:37.258888  658020 status.go:384] host is not running, skipping remaining checks
	I0210 12:45:37.258895  658020 status.go:176] multinode-790589-m03 status: &{Name:multinode-790589-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.25s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (39.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790589 node start m03 -v=7 --alsologtostderr
E0210 12:45:39.416881  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/client.crt: no such file or directory" logger="UnhandledError"
E0210 12:45:46.485532  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/functional-653300/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-790589 node start m03 -v=7 --alsologtostderr: (39.342525965s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790589 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.98s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (340.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-790589
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-790589
E0210 12:47:36.347010  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-790589: (3m3.085437914s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-790589 --wait=true -v=8 --alsologtostderr
E0210 12:50:46.485516  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/functional-653300/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-790589 --wait=true -v=8 --alsologtostderr: (2m37.719725564s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-790589
--- PASS: TestMultiNode/serial/RestartKeepsNodes (340.91s)
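
RestartKeepsNodes is the long-running check in this group: the whole profile is stopped (just over three minutes here) and then restarted with --wait=true, after which the node list must match the pre-stop list. A condensed sketch of that flow under the same profile name:

	# Record the node list, stop every machine, restart, and compare.
	out/minikube-linux-amd64 node list -p multinode-790589
	out/minikube-linux-amd64 stop -p multinode-790589
	out/minikube-linux-amd64 start -p multinode-790589 --wait=true -v=8 --alsologtostderr
	out/minikube-linux-amd64 node list -p multinode-790589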

                                                
                                    
TestMultiNode/serial/DeleteNode (2.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790589 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-790589 node delete m03: (2.211769205s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790589 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.75s)
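
DeleteNode verifies the removal with a small go-template that prints only each remaining node's Ready condition. The same check, assuming the two remaining nodes from the step above:

	# Remove the third node from the profile.
	out/minikube-linux-amd64 -p multinode-790589 node delete m03
	# Print just the Ready condition ("True"/"False") for every node left in the cluster.
	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'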

                                                
                                    
TestMultiNode/serial/StopMultiNode (181.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790589 stop
E0210 12:52:36.347020  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/client.crt: no such file or directory" logger="UnhandledError"
E0210 12:53:49.556389  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/functional-653300/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-790589 stop: (3m1.508215219s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790589 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-790589 status: exit status 7 (92.600322ms)

                                                
                                                
-- stdout --
	multinode-790589
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-790589-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790589 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-790589 status --alsologtostderr: exit status 7 (87.776017ms)

                                                
                                                
-- stdout --
	multinode-790589
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-790589-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0210 12:55:02.549993  661045 out.go:345] Setting OutFile to fd 1 ...
	I0210 12:55:02.550264  661045 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 12:55:02.550273  661045 out.go:358] Setting ErrFile to fd 2...
	I0210 12:55:02.550276  661045 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 12:55:02.550458  661045 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20383-625153/.minikube/bin
	I0210 12:55:02.550623  661045 out.go:352] Setting JSON to false
	I0210 12:55:02.550652  661045 mustload.go:65] Loading cluster: multinode-790589
	I0210 12:55:02.550748  661045 notify.go:220] Checking for updates...
	I0210 12:55:02.551055  661045 config.go:182] Loaded profile config "multinode-790589": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 12:55:02.551078  661045 status.go:174] checking status of multinode-790589 ...
	I0210 12:55:02.551520  661045 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:55:02.551578  661045 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:55:02.566775  661045 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39759
	I0210 12:55:02.567268  661045 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:55:02.567837  661045 main.go:141] libmachine: Using API Version  1
	I0210 12:55:02.567858  661045 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:55:02.568214  661045 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:55:02.568376  661045 main.go:141] libmachine: (multinode-790589) Calling .GetState
	I0210 12:55:02.570237  661045 status.go:371] multinode-790589 host status = "Stopped" (err=<nil>)
	I0210 12:55:02.570253  661045 status.go:384] host is not running, skipping remaining checks
	I0210 12:55:02.570258  661045 status.go:176] multinode-790589 status: &{Name:multinode-790589 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0210 12:55:02.570284  661045 status.go:174] checking status of multinode-790589-m02 ...
	I0210 12:55:02.570612  661045 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:55:02.570656  661045 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:55:02.585838  661045 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36595
	I0210 12:55:02.586256  661045 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:55:02.586684  661045 main.go:141] libmachine: Using API Version  1
	I0210 12:55:02.586706  661045 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:55:02.586993  661045 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:55:02.587221  661045 main.go:141] libmachine: (multinode-790589-m02) Calling .GetState
	I0210 12:55:02.588789  661045 status.go:371] multinode-790589-m02 host status = "Stopped" (err=<nil>)
	I0210 12:55:02.588800  661045 status.go:384] host is not running, skipping remaining checks
	I0210 12:55:02.588805  661045 status.go:176] multinode-790589-m02 status: &{Name:multinode-790589-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (181.69s)
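
As the transcript shows, minikube status exits with code 7 once the hosts are stopped, so any script polling a stopped profile has to tolerate that non-zero exit. A minimal sketch:

	# Stop every node in the profile, then poll status without aborting on the expected failure.
	out/minikube-linux-amd64 -p multinode-790589 stop
	out/minikube-linux-amd64 -p multinode-790589 status || echo "status exit code: $? (7 is expected while stopped)"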

                                                
                                    
TestMultiNode/serial/RestartMultiNode (117.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-790589 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0210 12:55:46.485306  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/functional-653300/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-790589 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m56.662578025s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790589 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (117.20s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (44.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-790589
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-790589-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-790589-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (77.799712ms)

                                                
                                                
-- stdout --
	* [multinode-790589-m02] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20383
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20383-625153/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20383-625153/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-790589-m02' is duplicated with machine name 'multinode-790589-m02' in profile 'multinode-790589'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-790589-m03 --driver=kvm2  --container-runtime=crio
E0210 12:57:36.339793  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-790589-m03 --driver=kvm2  --container-runtime=crio: (43.13950846s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-790589
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-790589: exit status 80 (215.860499ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-790589 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-790589-m03 already exists in multinode-790589-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-790589-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (44.28s)
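
ValidateNameConflict confirms two guard rails: a standalone profile may not reuse a machine name that already belongs to another profile (exit status 14, MK_USAGE), and node add refuses to create a node whose generated name collides with an existing profile (exit status 80). A short reproduction, assuming the multinode-790589 profile still exists:

	# Show the machine names already claimed by the multinode profile.
	out/minikube-linux-amd64 node list -p multinode-790589
	# Reusing one of those names as a new profile is rejected with MK_USAGE.
	out/minikube-linux-amd64 start -p multinode-790589-m02 --driver=kvm2 --container-runtime=crio
	# The test cleans up its extra profile afterwards.
	out/minikube-linux-amd64 delete -p multinode-790589-m03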

                                                
                                    
TestScheduledStopUnix (113.94s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-560767 --memory=2048 --driver=kvm2  --container-runtime=crio
E0210 13:02:36.346697  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-560767 --memory=2048 --driver=kvm2  --container-runtime=crio: (42.25081374s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-560767 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-560767 -n scheduled-stop-560767
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-560767 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0210 13:03:09.535341  632352 retry.go:31] will retry after 92.078µs: open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/scheduled-stop-560767/pid: no such file or directory
I0210 13:03:09.536506  632352 retry.go:31] will retry after 105.974µs: open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/scheduled-stop-560767/pid: no such file or directory
I0210 13:03:09.537672  632352 retry.go:31] will retry after 315.461µs: open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/scheduled-stop-560767/pid: no such file or directory
I0210 13:03:09.538834  632352 retry.go:31] will retry after 310.138µs: open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/scheduled-stop-560767/pid: no such file or directory
I0210 13:03:09.540001  632352 retry.go:31] will retry after 407.767µs: open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/scheduled-stop-560767/pid: no such file or directory
I0210 13:03:09.541131  632352 retry.go:31] will retry after 987.341µs: open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/scheduled-stop-560767/pid: no such file or directory
I0210 13:03:09.542274  632352 retry.go:31] will retry after 648.342µs: open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/scheduled-stop-560767/pid: no such file or directory
I0210 13:03:09.543432  632352 retry.go:31] will retry after 1.572998ms: open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/scheduled-stop-560767/pid: no such file or directory
I0210 13:03:09.545649  632352 retry.go:31] will retry after 1.529049ms: open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/scheduled-stop-560767/pid: no such file or directory
I0210 13:03:09.547914  632352 retry.go:31] will retry after 5.554064ms: open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/scheduled-stop-560767/pid: no such file or directory
I0210 13:03:09.554134  632352 retry.go:31] will retry after 3.789314ms: open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/scheduled-stop-560767/pid: no such file or directory
I0210 13:03:09.558383  632352 retry.go:31] will retry after 9.260764ms: open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/scheduled-stop-560767/pid: no such file or directory
I0210 13:03:09.568605  632352 retry.go:31] will retry after 13.873199ms: open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/scheduled-stop-560767/pid: no such file or directory
I0210 13:03:09.582872  632352 retry.go:31] will retry after 16.817069ms: open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/scheduled-stop-560767/pid: no such file or directory
I0210 13:03:09.600168  632352 retry.go:31] will retry after 33.091829ms: open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/scheduled-stop-560767/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-560767 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-560767 -n scheduled-stop-560767
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-560767
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-560767 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-560767
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-560767: exit status 7 (69.215641ms)

                                                
                                                
-- stdout --
	scheduled-stop-560767
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-560767 -n scheduled-stop-560767
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-560767 -n scheduled-stop-560767: exit status 7 (72.291568ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-560767" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-560767
--- PASS: TestScheduledStopUnix (113.94s)
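
TestScheduledStopUnix exercises minikube's deferred shutdown: a stop is scheduled, the pending timer is read back through the TimeToStop status field, the stop is cancelled, and a short schedule is then allowed to fire. A condensed sketch against the profile name from this run:

	# Schedule a stop five minutes out, then inspect the pending timer.
	out/minikube-linux-amd64 stop -p scheduled-stop-560767 --schedule 5m
	out/minikube-linux-amd64 status --format='{{.TimeToStop}}' -p scheduled-stop-560767
	# Cancel the pending stop, then schedule a short one that is allowed to fire.
	out/minikube-linux-amd64 stop -p scheduled-stop-560767 --cancel-scheduled
	out/minikube-linux-amd64 stop -p scheduled-stop-560767 --schedule 15s
	# After the timer fires, status reports Stopped and exits with code 7.
	out/minikube-linux-amd64 status --format='{{.Host}}' -p scheduled-stop-560767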

                                                
                                    
TestRunningBinaryUpgrade (213.78s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1936302330 start -p running-upgrade-123942 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1936302330 start -p running-upgrade-123942 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m27.369174628s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-123942 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-123942 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (2m4.663732297s)
helpers_test.go:175: Cleaning up "running-upgrade-123942" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-123942
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-123942: (1.293293395s)
--- PASS: TestRunningBinaryUpgrade (213.78s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-125233 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-125233 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (96.981176ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-125233] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20383
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20383-625153/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20383-625153/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
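
The non-zero exit above is the point of the test: --no-kubernetes and --kubernetes-version are mutually exclusive, and the error text names the fix for the case where the version comes from a persisted global setting rather than the command line:

	# Rejected with exit status 14 (MK_USAGE): a version cannot be pinned while Kubernetes is disabled.
	out/minikube-linux-amd64 start -p NoKubernetes-125233 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 --container-runtime=crio
	# Clear a globally configured kubernetes-version, per the error message.
	out/minikube-linux-amd64 config unset kubernetes-version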

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (116.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-125233 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-125233 --driver=kvm2  --container-runtime=crio: (1m56.116193487s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-125233 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (116.38s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (15.12s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-125233 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-125233 --no-kubernetes --driver=kvm2  --container-runtime=crio: (13.626332572s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-125233 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-125233 status -o json: exit status 2 (226.168497ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-125233","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-125233
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-125233: (1.269012937s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (15.12s)
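
Re-running start with --no-kubernetes against a profile that previously ran Kubernetes leaves the VM up with no control plane, which is exactly what the JSON above reports (Host Running, Kubelet and APIServer Stopped) and why the status command exits with code 2. A short check:

	# Recreate the profile without Kubernetes, then confirm only the host is running.
	out/minikube-linux-amd64 start -p NoKubernetes-125233 --no-kubernetes --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 -p NoKubernetes-125233 status -o json || echo "non-zero exit is expected while kubelet is stopped"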

                                                
                                    
TestNoKubernetes/serial/Start (28.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-125233 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-125233 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.250508532s)
--- PASS: TestNoKubernetes/serial/Start (28.25s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-125233 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-125233 "sudo systemctl is-active --quiet service kubelet": exit status 1 (204.384776ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)
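
The VerifyK8sNotRunning checks probe the guest directly: systemctl is run over minikube ssh, and a non-zero exit confirms the kubelet unit is not active, which is the expected state in --no-kubernetes mode. The same probe, using the exact command line from the test:

	# A non-zero exit here means kubelet is not active, which is the desired outcome.
	out/minikube-linux-amd64 ssh -p NoKubernetes-125233 "sudo systemctl is-active --quiet service kubelet" || echo "kubelet is not running"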

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.01s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.01s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-125233
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-125233: (1.299500692s)
--- PASS: TestNoKubernetes/serial/Stop (1.30s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (60.73s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-125233 --driver=kvm2  --container-runtime=crio
E0210 13:07:36.337816  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-125233 --driver=kvm2  --container-runtime=crio: (1m0.734212095s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (60.73s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-125233 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-125233 "sudo systemctl is-active --quiet service kubelet": exit status 1 (226.048028ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.44s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.44s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (149.34s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2058760537 start -p stopped-upgrade-683993 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2058760537 start -p stopped-upgrade-683993 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m19.64598687s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2058760537 -p stopped-upgrade-683993 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2058760537 -p stopped-upgrade-683993 stop: (2.158783835s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-683993 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-683993 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m7.530284696s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (149.34s)
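
Both binary-upgrade tests follow the same pattern: create a cluster with an older released binary, optionally stop it, then restart the same profile with the binary under test and let it migrate the existing configuration. A sketch of the stopped-binary path; the /tmp path below is the temporary download specific to this run:

	# Create and stop a cluster with the older release.
	/tmp/minikube-v1.26.0.2058760537 start -p stopped-upgrade-683993 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
	/tmp/minikube-v1.26.0.2058760537 -p stopped-upgrade-683993 stop
	# Restart the same profile with the new binary; it upgrades the stopped cluster in place.
	out/minikube-linux-amd64 start -p stopped-upgrade-683993 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio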

                                                
                                    
TestNetworkPlugins/group/false (6.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-651187 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-651187 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (122.433226ms)

                                                
                                                
-- stdout --
	* [false-651187] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20383
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20383-625153/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20383-625153/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0210 13:09:05.410516  669897 out.go:345] Setting OutFile to fd 1 ...
	I0210 13:09:05.410667  669897 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 13:09:05.410673  669897 out.go:358] Setting ErrFile to fd 2...
	I0210 13:09:05.410677  669897 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 13:09:05.410910  669897 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20383-625153/.minikube/bin
	I0210 13:09:05.411615  669897 out.go:352] Setting JSON to false
	I0210 13:09:05.412735  669897 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":17495,"bootTime":1739175450,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0210 13:09:05.412816  669897 start.go:139] virtualization: kvm guest
	I0210 13:09:05.415139  669897 out.go:177] * [false-651187] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0210 13:09:05.420171  669897 out.go:177]   - MINIKUBE_LOCATION=20383
	I0210 13:09:05.420259  669897 notify.go:220] Checking for updates...
	I0210 13:09:05.422730  669897 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 13:09:05.424055  669897 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20383-625153/kubeconfig
	I0210 13:09:05.425407  669897 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20383-625153/.minikube
	I0210 13:09:05.426710  669897 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0210 13:09:05.429659  669897 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0210 13:09:05.431663  669897 config.go:182] Loaded profile config "kubernetes-upgrade-284631": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0210 13:09:05.431794  669897 config.go:182] Loaded profile config "running-upgrade-123942": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0210 13:09:05.431880  669897 config.go:182] Loaded profile config "stopped-upgrade-683993": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0210 13:09:05.431965  669897 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 13:09:05.470929  669897 out.go:177] * Using the kvm2 driver based on user configuration
	I0210 13:09:05.472206  669897 start.go:297] selected driver: kvm2
	I0210 13:09:05.472225  669897 start.go:901] validating driver "kvm2" against <nil>
	I0210 13:09:05.472237  669897 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0210 13:09:05.474366  669897 out.go:201] 
	W0210 13:09:05.475628  669897 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0210 13:09:05.476978  669897 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-651187 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-651187

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-651187

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-651187

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-651187

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-651187

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-651187

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-651187

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-651187

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-651187

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-651187

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-651187"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-651187"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-651187"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-651187

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-651187"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-651187"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-651187" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-651187" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-651187" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-651187" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-651187" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-651187" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-651187" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-651187" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-651187"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-651187"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-651187"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-651187"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-651187"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-651187" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-651187" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-651187" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-651187"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-651187"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-651187"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-651187"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-651187"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-651187

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-651187"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-651187"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-651187"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-651187"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-651187"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-651187"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-651187"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-651187"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-651187"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-651187"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-651187"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-651187"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-651187"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-651187"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-651187"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-651187"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-651187"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-651187"

                                                
                                                
----------------------- debugLogs end: false-651187 [took: 5.818007936s] --------------------------------
helpers_test.go:175: Cleaning up "false-651187" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-651187
--- PASS: TestNetworkPlugins/group/false (6.13s)
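
The "false" network-plugin group passes by confirming a guard rail rather than starting a cluster: with the crio runtime, disabling CNI is rejected before any VM is created. A one-line reproduction using the flags from this run:

	# Fails fast with exit status 14: the crio container runtime requires a CNI, so --cni=false is refused.
	out/minikube-linux-amd64 start -p false-651187 --memory=2048 --cni=false --driver=kvm2 --container-runtime=crio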

                                                
                                    
TestPause/serial/Start (59.11s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-264226 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-264226 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (59.10894382s)
--- PASS: TestPause/serial/Start (59.11s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (81.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-651187 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-651187 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m21.175857817s)
--- PASS: TestNetworkPlugins/group/auto/Start (81.18s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (47.76s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-264226 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0210 13:10:29.558168  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/functional-653300/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-264226 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (47.730106369s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (47.76s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.77s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-683993
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.77s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (76.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-651187 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
E0210 13:10:46.485996  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/functional-653300/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-651187 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m16.765332625s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (76.77s)

                                                
                                    
TestPause/serial/Pause (0.72s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-264226 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.72s)

                                                
                                    
TestPause/serial/VerifyStatus (0.24s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-264226 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-264226 --output=json --layout=cluster: exit status 2 (241.122434ms)

                                                
                                                
-- stdout --
	{"Name":"pause-264226","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-264226","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.24s)
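The JSON in the stdout block above is what VerifyStatus inspects: a paused profile reports StatusCode 418 / StatusName "Paused" at the cluster level and for the apiserver, the kubelet is 405 / "Stopped", and the status command itself exits non-zero (status 2 here). A minimal sketch of pulling those fields out, assuming jq is available (jq is not part of the test itself):

    out/minikube-linux-amd64 status -p pause-264226 --output=json --layout=cluster > status.json || true
    jq -r '.StatusName' status.json                                   # Paused
    jq -r '.Nodes[0].Components.apiserver.StatusName' status.json     # Paused
    jq -r '.Nodes[0].Components.kubelet.StatusName' status.json       # Stopped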

                                                
                                    
TestPause/serial/Unpause (0.60s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-264226 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.60s)

                                                
                                    
TestPause/serial/PauseAgain (0.73s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-264226 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.73s)

                                                
                                    
TestPause/serial/DeletePaused (0.99s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-264226 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.99s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.64s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.64s)
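Taken together, the TestPause serial subtests above walk a single profile through the full pause lifecycle. A condensed sketch of that sequence, using the commands exactly as they appear in this log:

    out/minikube-linux-amd64 start -p pause-264226 --memory=2048 --install-addons=false --wait=all --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 pause -p pause-264226 --alsologtostderr -v=5
    out/minikube-linux-amd64 status -p pause-264226 --output=json --layout=cluster    # exits 2 while paused
    out/minikube-linux-amd64 unpause -p pause-264226 --alsologtostderr -v=5
    out/minikube-linux-amd64 pause -p pause-264226 --alsologtostderr -v=5
    out/minikube-linux-amd64 delete -p pause-264226 --alsologtostderr -v=5
    out/minikube-linux-amd64 profile list --output json                               # the profile should no longer be listed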

                                                
                                    
TestNetworkPlugins/group/calico/Start (84.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-651187 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-651187 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m24.116211877s)
--- PASS: TestNetworkPlugins/group/calico/Start (84.12s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-651187 "pgrep -a kubelet"
I0210 13:11:36.213037  632352 config.go:182] Loaded profile config "auto-651187": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-651187 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-r9dwv" [032bde01-bd89-4fb6-87d2-c548063dac30] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-r9dwv" [032bde01-bd89-4fb6-87d2-c548063dac30] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003722466s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.22s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-651187 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-651187 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-651187 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)
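Every plugin group runs this same connectivity battery once its cluster is up: deploy the netcat test Deployment, then probe DNS, localhost, and hairpin traffic from inside it. The equivalent manual steps for this profile, with the commands taken from the log (testdata/netcat-deployment.yaml is the suite's test manifest):

    kubectl --context auto-651187 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context auto-651187 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-651187 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-651187 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"    # hairpin: the pod reaches itself through its Service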

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.00s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-8gd5c" [7034cf35-2cc5-41f6-98b8-31114bad3f60] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.0031388s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.00s)
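ControllerPod only waits for the CNI's daemon pod to be Running; the selectors used in this log are app=kindnet and k8s-app=calico-node in kube-system and app=flannel in kube-flannel. A rough kubectl equivalent of that wait (the explicit kubectl wait call and 600s timeout are illustrative; the test uses its own polling helper with a 10m limit):

    kubectl --context kindnet-651187 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=600s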

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-651187 "pgrep -a kubelet"
I0210 13:11:59.594649  632352 config.go:182] Loaded profile config "kindnet-651187": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-651187 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-xjcfb" [1c59299a-5e3f-4b34-a0a0-366589b661d3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-xjcfb" [1c59299a-5e3f-4b34-a0a0-366589b661d3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.00421928s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.28s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (73.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-651187 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-651187 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m13.185388362s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (73.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-651187 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-651187 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-651187 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (84.99s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-651187 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-651187 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m24.99308027s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (84.99s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-6wcg8" [8aeb5336-7e8d-4b56-ab7d-75d615449906] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004066877s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-651187 "pgrep -a kubelet"
I0210 13:12:42.997954  632352 config.go:182] Loaded profile config "calico-651187": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-651187 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-nq8fw" [402efd4c-6cc2-416a-b043-c99d89707976] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-nq8fw" [402efd4c-6cc2-416a-b043-c99d89707976] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004069213s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.23s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-651187 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-651187 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-651187 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (91.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-651187 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-651187 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m31.043064988s)
--- PASS: TestNetworkPlugins/group/flannel/Start (91.04s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-651187 "pgrep -a kubelet"
I0210 13:13:17.332443  632352 config.go:182] Loaded profile config "custom-flannel-651187": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-651187 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-hh8x7" [aaa31b81-1a29-4501-8b51-e41ee3976038] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-hh8x7" [aaa31b81-1a29-4501-8b51-e41ee3976038] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004618554s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.29s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-651187 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.29s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-651187 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-651187 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (69.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-651187 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-651187 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m9.520589971s)
--- PASS: TestNetworkPlugins/group/bridge/Start (69.52s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-651187 "pgrep -a kubelet"
I0210 13:13:52.701082  632352 config.go:182] Loaded profile config "enable-default-cni-651187": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-651187 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-vjc5g" [7fb63366-1913-4e9d-8272-baccbe3b38ee] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-vjc5g" [7fb63366-1913-4e9d-8272-baccbe3b38ee] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.003781045s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.24s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-651187 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-651187 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-651187 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-6cxqh" [f55d22b3-4b87-4654-ade9-450c5257744b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004168313s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-651187 "pgrep -a kubelet"
I0210 13:14:50.365268  632352 config.go:182] Loaded profile config "flannel-651187": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (13.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-651187 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-pbfcg" [324c8f02-b938-44e4-83bd-be3cd4e908bd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-pbfcg" [324c8f02-b938-44e4-83bd-be3cd4e908bd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 13.00395183s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (13.27s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-651187 "pgrep -a kubelet"
I0210 13:14:55.393781  632352 config.go:182] Loaded profile config "bridge-651187": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-651187 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-qst7m" [6d58240e-60d9-46b0-a69d-cd0971b23080] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-qst7m" [6d58240e-60d9-46b0-a69d-cd0971b23080] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003885774s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.25s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-651187 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-651187 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-651187 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-651187 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-651187 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-651187 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)
E0210 13:24:20.634075  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/enable-default-cni-651187/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (73.64s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-112306 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-112306 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (1m13.636435773s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (73.64s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (109.04s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-396582 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
E0210 13:15:46.485943  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/functional-653300/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-396582 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (1m49.04249813s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (109.04s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.30s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-112306 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5a51b51a-25ea-4c05-ba85-1455d9dc652b] Pending
helpers_test.go:344: "busybox" [5a51b51a-25ea-4c05-ba85-1455d9dc652b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0210 13:16:36.421410  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/auto-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:16:36.427770  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/auto-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:16:36.439236  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/auto-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:16:36.460672  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/auto-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:16:36.502222  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/auto-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:16:36.583700  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/auto-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:16:36.744985  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/auto-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:16:37.067025  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/auto-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:16:37.708811  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/auto-651187/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [5a51b51a-25ea-4c05-ba85-1455d9dc652b] Running
E0210 13:16:38.990146  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/auto-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:16:41.551441  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/auto-651187/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.00498669s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-112306 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.30s)
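DeployApp is a small smoke test: create the busybox pod from testdata, wait for it to reach Running, then confirm exec works by reading the open-file limit. The same steps by hand, with the kubectl get call standing in for the test's own polling helper:

    kubectl --context no-preload-112306 create -f testdata/busybox.yaml
    kubectl --context no-preload-112306 get pods -l integration-test=busybox    # repeat until Running
    kubectl --context no-preload-112306 exec busybox -- /bin/sh -c "ulimit -n"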

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.98s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-112306 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-112306 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.98s)
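EnableAddonWhileActive turns on metrics-server against the running profile, overriding the addon's image and registry so nothing is pulled from the real registry, then checks that the Deployment exists. Verbatim from the log:

    out/minikube-linux-amd64 addons enable metrics-server -p no-preload-112306 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
    kubectl --context no-preload-112306 describe deploy/metrics-server -n kube-system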

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (90.81s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-112306 --alsologtostderr -v=3
E0210 13:16:46.673618  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/auto-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:16:53.359289  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/kindnet-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:16:53.365700  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/kindnet-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:16:53.377140  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/kindnet-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:16:53.398979  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/kindnet-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:16:53.440389  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/kindnet-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:16:53.521863  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/kindnet-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:16:53.683413  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/kindnet-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:16:54.005396  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/kindnet-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:16:54.646819  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/kindnet-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:16:55.928437  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/kindnet-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:16:56.914941  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/auto-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:16:58.490525  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/kindnet-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:17:03.612681  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/kindnet-651187/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-112306 --alsologtostderr -v=3: (1m30.809644278s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (90.81s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-396582 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [cb179f71-b289-41c4-b5d9-a6fd43ad8444] Pending
helpers_test.go:344: "busybox" [cb179f71-b289-41c4-b5d9-a6fd43ad8444] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0210 13:17:13.855013  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/kindnet-651187/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [cb179f71-b289-41c4-b5d9-a6fd43ad8444] Running
E0210 13:17:17.397211  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/auto-651187/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003707866s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-396582 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.26s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.93s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-396582 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-396582 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.93s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (91.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-396582 --alsologtostderr -v=3
E0210 13:17:34.337067  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/kindnet-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:17:36.337762  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:17:36.776356  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/calico-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:17:36.782711  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/calico-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:17:36.794007  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/calico-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:17:36.815444  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/calico-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:17:36.856925  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/calico-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:17:36.938474  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/calico-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:17:37.100029  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/calico-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:17:37.421765  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/calico-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:17:38.063904  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/calico-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:17:39.345249  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/calico-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:17:41.907509  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/calico-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:17:47.029795  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/calico-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:17:57.271838  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/calico-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:17:58.358948  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/auto-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:18:15.299482  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/kindnet-651187/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-396582 --alsologtostderr -v=3: (1m31.206985193s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (91.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-112306 -n no-preload-112306
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-112306 -n no-preload-112306: exit status 7 (77.618636ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-112306 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)
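After the stop, status --format={{.Host}} prints Stopped and exits with status 7, which the test records as "may be ok" before enabling the dashboard addon on the stopped profile. A small shell sketch of the same check (the || echo wrapper is illustrative, not part of the test):

    out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-112306 -n no-preload-112306 || echo "exit status $? (7 = stopped, tolerated here)"
    out/minikube-linux-amd64 addons enable dashboard -p no-preload-112306 --images=MetricsScraper=registry.k8s.io/echoserver:1.4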

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (314.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-112306 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
E0210 13:18:17.603309  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/custom-flannel-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:18:17.609709  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/custom-flannel-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:18:17.621067  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/custom-flannel-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:18:17.642462  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/custom-flannel-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:18:17.683866  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/custom-flannel-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:18:17.753270  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/calico-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:18:17.765675  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/custom-flannel-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:18:17.927894  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/custom-flannel-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:18:18.249228  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/custom-flannel-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:18:18.891417  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/custom-flannel-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:18:20.172887  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/custom-flannel-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:18:22.734537  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/custom-flannel-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:18:27.856477  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/custom-flannel-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:18:38.098655  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/custom-flannel-651187/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-112306 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (5m14.081613628s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-112306 -n no-preload-112306
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (314.38s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-396582 -n embed-certs-396582
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-396582 -n embed-certs-396582: exit status 7 (78.635491ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-396582 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (298.9s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-396582 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
E0210 13:18:52.928944  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/enable-default-cni-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:18:52.936207  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/enable-default-cni-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:18:52.947812  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/enable-default-cni-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:18:52.969239  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/enable-default-cni-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:18:53.011441  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/enable-default-cni-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:18:53.092988  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/enable-default-cni-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:18:53.254554  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/enable-default-cni-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:18:53.576238  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/enable-default-cni-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:18:54.217831  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/enable-default-cni-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:18:55.500107  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/enable-default-cni-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:18:58.061905  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/enable-default-cni-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:18:58.580772  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/custom-flannel-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:18:58.715345  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/calico-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:18:59.420754  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:19:03.183514  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/enable-default-cni-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:19:13.425596  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/enable-default-cni-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:19:20.280831  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/auto-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:19:33.907978  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/enable-default-cni-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:19:37.221744  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/kindnet-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:19:39.542688  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/custom-flannel-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:19:44.144446  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/flannel-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:19:44.150900  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/flannel-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:19:44.162381  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/flannel-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:19:44.183865  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/flannel-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:19:44.225290  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/flannel-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:19:44.306765  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/flannel-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:19:44.468329  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/flannel-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:19:44.790579  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/flannel-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:19:45.432407  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/flannel-651187/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-396582 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (4m58.589030771s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-396582 -n embed-certs-396582
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (298.90s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (85.41s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-957542 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
E0210 13:19:54.397398  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/flannel-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:19:55.623339  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/bridge-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:19:55.629735  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/bridge-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:19:55.641117  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/bridge-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:19:55.662523  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/bridge-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:19:55.703885  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/bridge-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:19:55.785421  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/bridge-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:19:55.946993  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/bridge-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:19:56.268365  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/bridge-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:19:56.909665  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/bridge-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:19:58.191557  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/bridge-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:20:00.753579  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/bridge-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:20:04.639526  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/flannel-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:20:05.875724  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/bridge-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:20:14.869994  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/enable-default-cni-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:20:16.117308  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/bridge-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:20:20.637451  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/calico-651187/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-957542 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (1m25.412207659s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (85.41s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (2.52s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-745712 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-745712 --alsologtostderr -v=3: (2.517998901s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.52s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.94s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-745712 -n old-k8s-version-745712
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-745712 -n old-k8s-version-745712: exit status 7 (77.730448ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-745712 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.94s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-957542 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c36a3ee4-9b7f-4609-be42-a61dea0faf6e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0210 13:21:17.560927  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/bridge-651187/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [c36a3ee4-9b7f-4609-be42-a61dea0faf6e] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003223079s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-957542 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-957542 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-957542 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.03s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (91.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-957542 --alsologtostderr -v=3
E0210 13:21:36.421664  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/auto-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:21:36.792235  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/enable-default-cni-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:21:53.360033  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/kindnet-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:22:04.122377  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/auto-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:22:21.063662  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/kindnet-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:22:28.004425  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/flannel-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:22:36.338404  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/addons-234038/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:22:36.776743  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/calico-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:22:39.482354  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/bridge-651187/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-957542 --alsologtostderr -v=3: (1m31.020646547s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.02s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-957542 -n default-k8s-diff-port-957542
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-957542 -n default-k8s-diff-port-957542: exit status 7 (77.140397ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-957542 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (304.4s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-957542 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
E0210 13:23:04.479194  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/calico-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:23:17.603784  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/custom-flannel-651187/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-957542 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (5m4.152055972s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-957542 -n default-k8s-diff-port-957542
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (304.40s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-rjhfv" [ce7d7817-812a-45a4-b3c1-9ccb3e555eb1] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.006160076s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-rjhfv" [ce7d7817-812a-45a4-b3c1-9ccb3e555eb1] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004541166s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-112306 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.09s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-112306 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.98s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-112306 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-112306 -n no-preload-112306
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-112306 -n no-preload-112306: exit status 2 (266.066813ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-112306 -n no-preload-112306
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-112306 -n no-preload-112306: exit status 2 (286.192515ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-112306 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-112306 -n no-preload-112306
E0210 13:23:45.306567  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/custom-flannel-651187/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-112306 -n no-preload-112306
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.98s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (45.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-078760 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-078760 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (45.103777966s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (45.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-fv9p8" [21205c55-05b7-4084-b0d0-03178c2bb750] Running
E0210 13:23:52.929270  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/enable-default-cni-651187/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003948522s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-fv9p8" [21205c55-05b7-4084-b0d0-03178c2bb750] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003326493s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-396582 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-396582 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.61s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-396582 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-396582 -n embed-certs-396582
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-396582 -n embed-certs-396582: exit status 2 (264.02628ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-396582 -n embed-certs-396582
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-396582 -n embed-certs-396582: exit status 2 (263.796295ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-396582 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-396582 -n embed-certs-396582
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-396582 -n embed-certs-396582
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.61s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.12s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-078760 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-078760 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.120490894s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.12s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (7.39s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-078760 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-078760 --alsologtostderr -v=3: (7.390332496s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.39s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-078760 -n newest-cni-078760
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-078760 -n newest-cni-078760: exit status 7 (79.697891ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-078760 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (36.61s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-078760 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
E0210 13:24:44.144052  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/flannel-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:24:55.623322  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/bridge-651187/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:25:11.845929  632352 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20383-625153/.minikube/profiles/flannel-651187/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-078760 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (36.34119358s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-078760 -n newest-cni-078760
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (36.61s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.33s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-078760 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.33s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.65s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-078760 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-078760 -n newest-cni-078760
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-078760 -n newest-cni-078760: exit status 2 (247.550128ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-078760 -n newest-cni-078760
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-078760 -n newest-cni-078760: exit status 2 (243.430086ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-078760 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-078760 -n newest-cni-078760
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-078760 -n newest-cni-078760
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.65s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-g72kw" [676bc2af-2872-489d-a7a3-88e98be65d38] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0035548s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-g72kw" [676bc2af-2872-489d-a7a3-88e98be65d38] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004179453s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-957542 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-957542 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.46s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-957542 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-957542 -n default-k8s-diff-port-957542
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-957542 -n default-k8s-diff-port-957542: exit status 2 (243.797301ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-957542 -n default-k8s-diff-port-957542
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-957542 -n default-k8s-diff-port-957542: exit status 2 (236.226727ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-957542 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-957542 -n default-k8s-diff-port-957542
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-957542 -n default-k8s-diff-port-957542
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.46s)

                                                
                                    

Test skip (40/327)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.32.1/cached-images 0
15 TestDownloadOnly/v1.32.1/binaries 0
16 TestDownloadOnly/v1.32.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.29
33 TestAddons/serial/GCPAuth/RealCredentials 0
39 TestAddons/parallel/Olm 0
46 TestAddons/parallel/AmdGpuDevicePlugin 0
50 TestDockerFlags 0
53 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
125 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
126 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
127 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
128 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
129 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
130 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
131 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
132 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestFunctionalNewestKubernetes 0
157 TestGvisorAddon 0
179 TestImageBuild 0
206 TestKicCustomNetwork 0
207 TestKicExistingNetwork 0
208 TestKicCustomSubnet 0
209 TestKicStaticIP 0
241 TestChangeNoneUser 0
244 TestScheduledStopWindows 0
246 TestSkaffold 0
248 TestInsufficientStorage 0
252 TestMissingContainerUpgrade 0
267 TestNetworkPlugins/group/kubenet 3.57
275 TestNetworkPlugins/group/cilium 6.32
281 TestStartStop/group/disable-driver-mounts 0.16
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0.29s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-234038 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.29s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:480: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:567: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
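This subtest and the six tunnel subtests that follow are all skipped for the same reason: running 'route' would prompt for a password. A minimal sketch of that precondition check, assuming 'sudo -n' is used to probe for passwordless access (the exact minikube logic may differ):

	// Hypothetical precondition check: skip when `route` cannot run without
	// a password prompt. `sudo -n` fails instead of prompting interactively.
	package tunnel_test

	import (
		"os/exec"
		"testing"
	)

	func requirePasswordlessRoute(t *testing.T) {
		t.Helper()
		if err := exec.Command("sudo", "-n", "route", "-n").Run(); err != nil {
			t.Skipf("password required to execute 'route', skipping testTunnel: %v", err)
		}
	}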

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:84: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only runs with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires the none driver and a non-empty SUDO_USER env
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-651187 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-651187

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-651187

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-651187

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-651187

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-651187

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-651187

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-651187

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-651187

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-651187

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-651187

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-651187"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-651187"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-651187"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-651187

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-651187"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-651187"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-651187" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-651187" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-651187" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-651187" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-651187" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-651187" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-651187" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-651187" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-651187"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-651187"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-651187"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-651187"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-651187"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-651187" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-651187" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-651187" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-651187"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-651187"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-651187"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-651187"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-651187"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
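The empty kubeconfig above (no clusters, contexts, or users) is why every probe in this debugLogs block reports a missing context. A minimal sketch of checking for a context with client-go before probing, assuming the default kubeconfig loading rules; the profile name in the comment is the one from this log:

	// Hypothetical pre-check using k8s.io/client-go/tools/clientcmd; with the
	// empty config shown above, contextExists("kubenet-651187") returns false.
	package debuglogs

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func contextExists(name string) (bool, error) {
		rules := clientcmd.NewDefaultClientConfigLoadingRules()
		cfg, err := rules.Load()
		if err != nil {
			return false, fmt.Errorf("loading kubeconfig: %w", err)
		}
		_, ok := cfg.Contexts[name]
		return ok, nil
	}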

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-651187

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-651187"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-651187"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-651187"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-651187"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-651187"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-651187"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-651187"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-651187"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-651187"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-651187"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-651187"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-651187"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-651187"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-651187"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-651187"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-651187"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-651187"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-651187"

                                                
                                                
----------------------- debugLogs end: kubenet-651187 [took: 3.402762501s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-651187" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-651187
--- SKIP: TestNetworkPlugins/group/kubenet (3.57s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (6.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-651187 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-651187

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-651187

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-651187

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-651187

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-651187

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-651187

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-651187

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-651187

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-651187

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-651187

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-651187"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-651187"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-651187"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-651187

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-651187"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-651187"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-651187" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-651187" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-651187" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-651187" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-651187" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-651187" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-651187" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-651187" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-651187"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-651187"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-651187"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-651187"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-651187"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-651187

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-651187

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-651187" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-651187" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-651187

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-651187

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-651187" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-651187" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-651187" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-651187" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-651187" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-651187"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-651187"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-651187"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-651187"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-651187"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-651187

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-651187"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-651187"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-651187"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-651187"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-651187"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-651187"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-651187"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-651187"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-651187"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-651187"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-651187"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-651187"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-651187"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-651187"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-651187"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-651187"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-651187"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-651187" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-651187"

                                                
                                                
----------------------- debugLogs end: cilium-651187 [took: 6.133414512s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-651187" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-651187
--- SKIP: TestNetworkPlugins/group/cilium (6.32s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-349762" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-349762
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)
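As in the kubenet and cilium groups above, a skipped group still deletes the profile it reserved. A minimal sketch of registering that cleanup with t.Cleanup, shelling out to the same binary this log invokes (the binary path and flags are taken from the log, not guaranteed by the harness):

	// Hypothetical cleanup registration mirroring helpers_test.go above:
	// delete the minikube profile even when the test body is skipped.
	package startstop_test

	import (
		"os/exec"
		"testing"
	)

	func cleanupProfile(t *testing.T, profile string) {
		t.Helper()
		t.Cleanup(func() {
			out, err := exec.Command("out/minikube-linux-amd64", "delete", "-p", profile).CombinedOutput()
			if err != nil {
				t.Logf("failed to delete profile %q: %v\n%s", profile, err, out)
			}
		})
	}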

                                                
                                    